2026-03-13 00:00:10.280968 | Job console starting
2026-03-13 00:00:10.295223 | Updating git repos
2026-03-13 00:00:10.788707 | Cloning repos into workspace
2026-03-13 00:00:11.261584 | Restoring repo states
2026-03-13 00:00:11.325170 | Merging changes
2026-03-13 00:00:11.325190 | Checking out repos
2026-03-13 00:00:11.890330 | Preparing playbooks
2026-03-13 00:00:13.453232 | Running Ansible setup
2026-03-13 00:00:21.221770 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-13 00:00:23.260434 |
2026-03-13 00:00:23.260565 | PLAY [Base pre]
2026-03-13 00:00:23.285912 |
2026-03-13 00:00:23.286030 | TASK [Setup log path fact]
2026-03-13 00:00:23.305659 | orchestrator | ok
2026-03-13 00:00:23.325283 |
2026-03-13 00:00:23.325418 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-13 00:00:23.385414 | orchestrator | ok
2026-03-13 00:00:23.403865 |
2026-03-13 00:00:23.403978 | TASK [emit-job-header : Print job information]
2026-03-13 00:00:23.492852 | # Job Information
2026-03-13 00:00:23.493062 | Ansible Version: 2.16.14
2026-03-13 00:00:23.493113 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-03-13 00:00:23.493149 | Pipeline: periodic-midnight
2026-03-13 00:00:23.493172 | Executor: 521e9411259a
2026-03-13 00:00:23.493192 | Triggered by: https://github.com/osism/testbed
2026-03-13 00:00:23.493214 | Event ID: fcd5e110b62548aa83d02f3e7f3ac493
2026-03-13 00:00:23.509862 |
2026-03-13 00:00:23.509977 | LOOP [emit-job-header : Print node information]
2026-03-13 00:00:23.802245 | orchestrator | ok:
2026-03-13 00:00:23.802403 | orchestrator | # Node Information
2026-03-13 00:00:23.802440 | orchestrator | Inventory Hostname: orchestrator
2026-03-13 00:00:23.802469 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-13 00:00:23.802492 | orchestrator | Username: zuul-testbed01
2026-03-13 00:00:23.802513 | orchestrator | Distro: Debian 12.13
2026-03-13 00:00:23.802536 | orchestrator | Provider: static-testbed
2026-03-13 00:00:23.802557 | orchestrator | Region:
2026-03-13 00:00:23.802577 | orchestrator | Label: testbed-orchestrator
2026-03-13 00:00:23.802597 | orchestrator | Product Name: OpenStack Nova
2026-03-13 00:00:23.802617 | orchestrator | Interface IP: 81.163.193.140
2026-03-13 00:00:23.814613 |
2026-03-13 00:00:23.814715 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-13 00:00:25.031786 | orchestrator -> localhost | changed
2026-03-13 00:00:25.040694 |
2026-03-13 00:00:25.040806 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-13 00:00:27.468633 | orchestrator -> localhost | changed
2026-03-13 00:00:27.480626 |
2026-03-13 00:00:27.480720 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-13 00:00:28.184955 | orchestrator -> localhost | ok
2026-03-13 00:00:28.190533 |
2026-03-13 00:00:28.190624 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-13 00:00:28.244151 | orchestrator | ok
2026-03-13 00:00:28.275770 | orchestrator | included: /var/lib/zuul/builds/e7d915585cc84a62ad88b8cff0bf3e53/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-13 00:00:28.282164 |
2026-03-13 00:00:28.282243 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-13 00:00:33.335392 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-13 00:00:33.336578 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/e7d915585cc84a62ad88b8cff0bf3e53/work/e7d915585cc84a62ad88b8cff0bf3e53_id_rsa
2026-03-13 00:00:33.336634 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/e7d915585cc84a62ad88b8cff0bf3e53/work/e7d915585cc84a62ad88b8cff0bf3e53_id_rsa.pub
2026-03-13 00:00:33.336659 | orchestrator -> localhost | The key fingerprint is:
2026-03-13 00:00:33.336679 | orchestrator -> localhost | SHA256:FdmhaLN+oo73VTLTUHvCFgE/aRLNv3u6ZeJwpYixMQE zuul-build-sshkey
2026-03-13 00:00:33.336698 | orchestrator -> localhost | The key's randomart image is:
2026-03-13 00:00:33.336723 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-13 00:00:33.336742 | orchestrator -> localhost | | E +*=o |
2026-03-13 00:00:33.336760 | orchestrator -> localhost | | o.B++ |
2026-03-13 00:00:33.336776 | orchestrator -> localhost | | + * X.. |
2026-03-13 00:00:33.336793 | orchestrator -> localhost | | . + B +. |
2026-03-13 00:00:33.336809 | orchestrator -> localhost | | S B o ..|
2026-03-13 00:00:33.336827 | orchestrator -> localhost | | . @ ..o |
2026-03-13 00:00:33.336844 | orchestrator -> localhost | | o = o +.o|
2026-03-13 00:00:33.336861 | orchestrator -> localhost | | ... + +.+.|
2026-03-13 00:00:33.336878 | orchestrator -> localhost | | .oo.. ++ |
2026-03-13 00:00:33.336895 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-13 00:00:33.336942 | orchestrator -> localhost | ok: Runtime: 0:00:03.433153
2026-03-13 00:00:33.345007 |
2026-03-13 00:00:33.345086 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-13 00:00:33.385628 | orchestrator | ok
2026-03-13 00:00:33.436566 | orchestrator | included: /var/lib/zuul/builds/e7d915585cc84a62ad88b8cff0bf3e53/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-13 00:00:33.464147 |
2026-03-13 00:00:33.464246 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-13 00:00:33.511897 | orchestrator | skipping: Conditional result was False
2026-03-13 00:00:33.518797 |
2026-03-13 00:00:33.518913 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-13 00:00:34.144166 | orchestrator | changed
2026-03-13 00:00:34.158691 |
2026-03-13 00:00:34.158775 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-13 00:00:34.472371 | orchestrator | ok
2026-03-13 00:00:34.482303 |
2026-03-13 00:00:34.482419 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-13 00:00:34.957690 | orchestrator | ok
2026-03-13 00:00:34.963618 |
2026-03-13 00:00:34.963712 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-13 00:00:35.452480 | orchestrator | ok
2026-03-13 00:00:35.465869 |
2026-03-13 00:00:35.465987 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-13 00:00:35.534174 | orchestrator | skipping: Conditional result was False
2026-03-13 00:00:35.543299 |
2026-03-13 00:00:35.543400 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-13 00:00:37.033529 | orchestrator -> localhost | changed
2026-03-13 00:00:37.054604 |
2026-03-13 00:00:37.054707 | TASK [add-build-sshkey : Add back temp key]
2026-03-13 00:00:37.707589 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/e7d915585cc84a62ad88b8cff0bf3e53/work/e7d915585cc84a62ad88b8cff0bf3e53_id_rsa (zuul-build-sshkey)
2026-03-13 00:00:37.707775 | orchestrator -> localhost | ok: Runtime: 0:00:00.028968
2026-03-13 00:00:37.715426 |
2026-03-13 00:00:37.715514 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-13 00:00:38.320487 | orchestrator | ok
2026-03-13 00:00:38.325495 |
2026-03-13 00:00:38.325579 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-13 00:00:38.368271 | orchestrator | skipping: Conditional result was False
2026-03-13 00:00:38.539216 |
2026-03-13 00:00:38.539320 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-13 00:00:39.152230 | orchestrator | ok
2026-03-13 00:00:39.173405 |
2026-03-13 00:00:39.173499 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-13 00:00:39.222600 | orchestrator | ok
2026-03-13 00:00:39.234246 |
2026-03-13 00:00:39.234567 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-13 00:00:40.016328 | orchestrator -> localhost | ok
2026-03-13 00:00:40.022512 |
2026-03-13 00:00:40.022607 | TASK [validate-host : Collect information about the host]
2026-03-13 00:00:41.888921 | orchestrator | ok
2026-03-13 00:00:41.936280 |
2026-03-13 00:00:41.936380 | TASK [validate-host : Sanitize hostname]
2026-03-13 00:00:42.057028 | orchestrator | ok
2026-03-13 00:00:42.064906 |
2026-03-13 00:00:42.065005 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-13 00:00:43.954784 | orchestrator -> localhost | changed
2026-03-13 00:00:43.959916 |
2026-03-13 00:00:43.960007 | TASK [validate-host : Collect information about zuul worker]
2026-03-13 00:00:44.673045 | orchestrator | ok
2026-03-13 00:00:44.679457 |
2026-03-13 00:00:44.679547 | TASK [validate-host : Write out all zuul information for each host]
2026-03-13 00:00:46.307826 | orchestrator -> localhost | changed
2026-03-13 00:00:46.327616 |
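The add-build-sshkey sequence above amounts to generating a per-build RSA key on the executor and loading it into the local ssh-agent once the master key has been removed. A minimal sketch of the equivalent commands, assuming plain OpenSSH defaults (the role's actual task arguments are not visible in this log; the 3072-bit size matches the randomart header, and the UUID and paths are copied from the output):

```shell
# Hypothetical reconstruction of what the add-build-sshkey role does.
# BUILD and WORK come from the log above; the exact flags are assumptions.
BUILD=e7d915585cc84a62ad88b8cff0bf3e53
WORK=/var/lib/zuul/builds/$BUILD/work

# "Create Temp SSH key": RSA 3072, no passphrase, comment zuul-build-sshkey.
ssh-keygen -t rsa -b 3072 -N '' -C zuul-build-sshkey -f "$WORK/${BUILD}_id_rsa"

# "Add back temp key": produces the "Identity added: ... (zuul-build-sshkey)"
# line once the master key has been removed from the agent.
ssh-add "$WORK/${BUILD}_id_rsa"
```

The public half is then appended to `authorized_keys` on every node ("Enable access via build key on all nodes"), so later playbooks can reach the nodes without the executor's master key.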
2026-03-13 00:00:46.329126 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-13 00:00:46.694082 | orchestrator | ok
2026-03-13 00:00:46.699643 |
2026-03-13 00:00:46.699733 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-13 00:02:13.776808 | orchestrator | changed:
2026-03-13 00:02:13.777026 | orchestrator | .d..t...... src/
2026-03-13 00:02:13.777061 | orchestrator | .d..t...... src/github.com/
2026-03-13 00:02:13.777085 | orchestrator | .d..t...... src/github.com/osism/
2026-03-13 00:02:13.777107 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-13 00:02:13.777148 | orchestrator | RedHat.yml
2026-03-13 00:02:13.815495 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-13 00:02:13.815687 | orchestrator | RedHat.yml
2026-03-13 00:02:13.815772 | orchestrator | = 1.53.0"...
2026-03-13 00:02:28.181924 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-03-13 00:02:28.323486 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-13 00:02:28.904442 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-13 00:02:28.964112 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-13 00:02:29.410929 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-13 00:02:29.470002 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-13 00:02:30.148120 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-13 00:02:30.148275 | orchestrator |
2026-03-13 00:02:30.148286 | orchestrator | Providers are signed by their developers.
2026-03-13 00:02:30.148292 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-13 00:02:30.148309 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-13 00:02:30.148361 | orchestrator |
2026-03-13 00:02:30.148371 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-13 00:02:30.148387 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-13 00:02:30.148394 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-13 00:02:30.148409 | orchestrator | you run "tofu init" in the future.
2026-03-13 00:02:30.149663 | orchestrator |
2026-03-13 00:02:30.149754 | orchestrator | OpenTofu has been successfully initialized!
2026-03-13 00:02:30.149791 | orchestrator |
2026-03-13 00:02:30.149796 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-13 00:02:30.149801 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-13 00:02:30.149806 | orchestrator | should now work.
2026-03-13 00:02:30.149810 | orchestrator |
2026-03-13 00:02:30.149814 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-13 00:02:30.149818 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-13 00:02:30.149830 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-13 00:02:30.323557 | orchestrator | Created and switched to workspace "ci"!
2026-03-13 00:02:30.323616 | orchestrator |
2026-03-13 00:02:30.323622 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-13 00:02:30.323628 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-13 00:02:30.323634 | orchestrator | for this configuration.
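The OpenTofu output above corresponds to the usual init / workspace / plan sequence. A rough sketch of the driving commands, assuming plain `tofu` invocations (the testbed's actual wrapper script and any extra flags are not shown in the log):

```shell
set -e

# Downloads the hashicorp/local, hashicorp/null, and
# terraform-provider-openstack providers, verifies their signatures,
# and writes .terraform.lock.hcl.
tofu init

# Prints: Created and switched to workspace "ci"!
# Workspaces isolate state, so the new "ci" workspace starts empty.
tofu workspace new ci

# Emits the execution plan that follows in the log
# ("+ create" for new resources, "<= read" for data sources).
tofu plan
```

Committing the generated `.terraform.lock.hcl` (as the output suggests) pins the provider versions so later `tofu init` runs resolve identically.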
2026-03-13 00:02:30.440164 | orchestrator | ci.auto.tfvars
2026-03-13 00:02:30.446073 | orchestrator | default_custom.tf
2026-03-13 00:02:31.582714 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-13 00:02:32.131311 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-13 00:02:32.405827 | orchestrator |
2026-03-13 00:02:32.405896 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-13 00:02:32.405904 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-13 00:02:32.405909 | orchestrator |   + create
2026-03-13 00:02:32.405914 | orchestrator |  <= read (data resources)
2026-03-13 00:02:32.405919 | orchestrator |
2026-03-13 00:02:32.405923 | orchestrator | OpenTofu will perform the following actions:
2026-03-13 00:02:32.405934 | orchestrator |
2026-03-13 00:02:32.405939 | orchestrator |   # data.openstack_images_image_v2.image will be read during apply
2026-03-13 00:02:32.405944 | orchestrator |   # (config refers to values not yet known)
2026-03-13 00:02:32.405949 | orchestrator |  <= data "openstack_images_image_v2" "image" {
2026-03-13 00:02:32.405953 | orchestrator |       + checksum    = (known after apply)
2026-03-13 00:02:32.405958 | orchestrator |       + created_at  = (known after apply)
2026-03-13 00:02:32.405962 | orchestrator |       + file        = (known after apply)
2026-03-13 00:02:32.405966 | orchestrator |       + id          = (known after apply)
2026-03-13 00:02:32.405988 | orchestrator |       + metadata    = (known after apply)
2026-03-13 00:02:32.405993 | orchestrator |       + min_disk_gb = (known after apply)
2026-03-13 00:02:32.405997 | orchestrator |       + min_ram_mb  = (known after apply)
2026-03-13 00:02:32.406002 | orchestrator |       + most_recent = true
2026-03-13 00:02:32.406006 | orchestrator |       + name        = (known after apply)
2026-03-13 00:02:32.406010 | orchestrator |       + protected   = (known after apply)
2026-03-13 00:02:32.406042 | orchestrator |       + region      = (known after apply)
2026-03-13 00:02:32.406049 | orchestrator |       + schema      = (known after apply)
2026-03-13 00:02:32.406053 | orchestrator |       + size_bytes  = (known after apply)
2026-03-13 00:02:32.406057 | orchestrator |       + tags        = (known after apply)
2026-03-13 00:02:32.406062 | orchestrator |       + updated_at  = (known after apply)
2026-03-13 00:02:32.406066 | orchestrator |     }
2026-03-13 00:02:32.406073 | orchestrator |
2026-03-13 00:02:32.406078 | orchestrator |   # data.openstack_images_image_v2.image_node will be read during apply
2026-03-13 00:02:32.406082 | orchestrator |   # (config refers to values not yet known)
2026-03-13 00:02:32.406086 | orchestrator |  <= data "openstack_images_image_v2" "image_node" {
2026-03-13 00:02:32.406091 | orchestrator |       + checksum    = (known after apply)
2026-03-13 00:02:32.406095 | orchestrator |       + created_at  = (known after apply)
2026-03-13 00:02:32.406099 | orchestrator |       + file        = (known after apply)
2026-03-13 00:02:32.406103 | orchestrator |       + id          = (known after apply)
2026-03-13 00:02:32.406107 | orchestrator |       + metadata    = (known after apply)
2026-03-13 00:02:32.406111 | orchestrator |       + min_disk_gb = (known after apply)
2026-03-13 00:02:32.406115 | orchestrator |       + min_ram_mb  = (known after apply)
2026-03-13 00:02:32.406119 | orchestrator |       + most_recent = true
2026-03-13 00:02:32.406124 | orchestrator |       + name        = (known after apply)
2026-03-13 00:02:32.406128 | orchestrator |       + protected   = (known after apply)
2026-03-13 00:02:32.406132 | orchestrator |       + region      = (known after apply)
2026-03-13 00:02:32.406136 | orchestrator |       + schema      = (known after apply)
2026-03-13 00:02:32.406140 | orchestrator |       + size_bytes  = (known after apply)
2026-03-13 00:02:32.406144 | orchestrator |       + tags        = (known after apply)
2026-03-13 00:02:32.406148 | orchestrator |       + updated_at  = (known after apply)
2026-03-13 00:02:32.406152 | orchestrator |     }
2026-03-13 00:02:32.406158 | orchestrator |
2026-03-13 00:02:32.406163 | orchestrator |   # local_file.MANAGER_ADDRESS will be created
2026-03-13 00:02:32.406167 | orchestrator |   + resource "local_file" "MANAGER_ADDRESS" {
2026-03-13 00:02:32.406172 | orchestrator |       + content              = (known after apply)
2026-03-13 00:02:32.406176 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-13 00:02:32.406180 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-13 00:02:32.406184 | orchestrator |       + content_md5          = (known after apply)
2026-03-13 00:02:32.406188 | orchestrator |       + content_sha1         = (known after apply)
2026-03-13 00:02:32.406193 | orchestrator |       + content_sha256       = (known after apply)
2026-03-13 00:02:32.406197 | orchestrator |       + content_sha512       = (known after apply)
2026-03-13 00:02:32.406201 | orchestrator |       + directory_permission = "0777"
2026-03-13 00:02:32.406205 | orchestrator |       + file_permission      = "0644"
2026-03-13 00:02:32.406209 | orchestrator |       + filename             = ".MANAGER_ADDRESS.ci"
2026-03-13 00:02:32.406213 | orchestrator |       + id                   = (known after apply)
2026-03-13 00:02:32.406218 | orchestrator |     }
2026-03-13 00:02:32.406223 | orchestrator |
2026-03-13 00:02:32.406228 | orchestrator |   # local_file.id_rsa_pub will be created
2026-03-13 00:02:32.406232 | orchestrator |   + resource "local_file" "id_rsa_pub" {
2026-03-13 00:02:32.406236 | orchestrator |       + content              = (known after apply)
2026-03-13 00:02:32.406240 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-13 00:02:32.406244 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-13 00:02:32.406248 | orchestrator |       + content_md5          = (known after apply)
2026-03-13 00:02:32.406252 | orchestrator |       + content_sha1         = (known after apply)
2026-03-13 00:02:32.406257 | orchestrator |       + content_sha256       = (known after apply)
2026-03-13 00:02:32.406266 | orchestrator |       + content_sha512       = (known after apply)
2026-03-13 00:02:32.406270 | orchestrator |       + directory_permission = "0777"
2026-03-13 00:02:32.406275 | orchestrator |       + file_permission      = "0644"
2026-03-13 00:02:32.406284 | orchestrator |       + filename             = ".id_rsa.ci.pub"
2026-03-13 00:02:32.406288 | orchestrator |       + id                   = (known after apply)
2026-03-13 00:02:32.406292 | orchestrator |     }
2026-03-13 00:02:32.406298 | orchestrator |
2026-03-13 00:02:32.406302 | orchestrator |   # local_file.inventory will be created
2026-03-13 00:02:32.406306 | orchestrator |   + resource "local_file" "inventory" {
2026-03-13 00:02:32.406311 | orchestrator |       + content              = (known after apply)
2026-03-13 00:02:32.406315 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-13 00:02:32.406319 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-13 00:02:32.406323 | orchestrator |       + content_md5          = (known after apply)
2026-03-13 00:02:32.406327 | orchestrator |       + content_sha1         = (known after apply)
2026-03-13 00:02:32.406332 | orchestrator |       + content_sha256       = (known after apply)
2026-03-13 00:02:32.406336 | orchestrator |       + content_sha512       = (known after apply)
2026-03-13 00:02:32.406340 | orchestrator |       + directory_permission = "0777"
2026-03-13 00:02:32.406344 | orchestrator |       + file_permission      = "0644"
2026-03-13 00:02:32.406348 | orchestrator |       + filename             = "inventory.ci"
2026-03-13 00:02:32.406353 | orchestrator |       + id                   = (known after apply)
2026-03-13 00:02:32.406357 | orchestrator |     }
2026-03-13 00:02:32.406363 | orchestrator |
2026-03-13 00:02:32.406367 | orchestrator |   # local_sensitive_file.id_rsa will be created
2026-03-13 00:02:32.406371 | orchestrator |   + resource "local_sensitive_file" "id_rsa" {
2026-03-13 00:02:32.406375 | orchestrator |       + content              = (sensitive value)
2026-03-13 00:02:32.406380 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-13 00:02:32.406384 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-13 00:02:32.406388 | orchestrator |       + content_md5          = (known after apply)
2026-03-13 00:02:32.406392 | orchestrator |       + content_sha1         = (known after apply)
2026-03-13 00:02:32.406396 | orchestrator |       + content_sha256       = (known after apply)
2026-03-13 00:02:32.406400 | orchestrator |       + content_sha512       = (known after apply)
2026-03-13 00:02:32.406405 | orchestrator |       + directory_permission = "0700"
2026-03-13 00:02:32.406409 | orchestrator |       + file_permission      = "0600"
2026-03-13 00:02:32.406413 | orchestrator |       + filename             = ".id_rsa.ci"
2026-03-13 00:02:32.406417 | orchestrator |       + id                   = (known after apply)
2026-03-13 00:02:32.406422 | orchestrator |     }
2026-03-13 00:02:32.406427 | orchestrator |
2026-03-13 00:02:32.406431 | orchestrator |   # null_resource.node_semaphore will be created
2026-03-13 00:02:32.406436 | orchestrator |   + resource "null_resource" "node_semaphore" {
2026-03-13 00:02:32.406440 | orchestrator |       + id = (known after apply)
2026-03-13 00:02:32.406445 | orchestrator |     }
2026-03-13 00:02:32.406450 | orchestrator |
2026-03-13 00:02:32.406454 | orchestrator |   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-13 00:02:32.406487 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-13 00:02:32.406493 | orchestrator |       + attachment           = (known after apply)
2026-03-13 00:02:32.406497 | orchestrator |       + availability_zone    = "nova"
2026-03-13 00:02:32.406501 | orchestrator |       + id                   = (known after apply)
2026-03-13 00:02:32.406506 | orchestrator |       + image_id             = (known after apply)
2026-03-13 00:02:32.406510 | orchestrator |       + metadata             = (known after apply)
2026-03-13 00:02:32.406514 | orchestrator |       + name                 = "testbed-volume-manager-base"
2026-03-13 00:02:32.406518 | orchestrator |       + region               = (known after apply)
2026-03-13 00:02:32.406522 | orchestrator |       + size                 = 80
2026-03-13 00:02:32.406527 | orchestrator |       + volume_retype_policy = "never"
2026-03-13 00:02:32.406531 | orchestrator |       + volume_type          = "ssd"
2026-03-13 00:02:32.406535 | orchestrator |     }
2026-03-13 00:02:32.406541 | orchestrator |
2026-03-13 00:02:32.406545 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-13 00:02:32.406550 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-13 00:02:32.406554 | orchestrator |       + attachment           = (known after apply)
2026-03-13 00:02:32.406558 | orchestrator |       + availability_zone    = "nova"
2026-03-13 00:02:32.406562 | orchestrator |       + id                   = (known after apply)
2026-03-13 00:02:32.406570 | orchestrator |       + image_id             = (known after apply)
2026-03-13 00:02:32.406575 | orchestrator |       + metadata             = (known after apply)
2026-03-13 00:02:32.406579 | orchestrator |       + name                 = "testbed-volume-0-node-base"
2026-03-13 00:02:32.406583 | orchestrator |       + region               = (known after apply)
2026-03-13 00:02:32.406587 | orchestrator |       + size                 = 80
2026-03-13 00:02:32.406592 | orchestrator |       + volume_retype_policy = "never"
2026-03-13 00:02:32.406596 | orchestrator |       + volume_type          = "ssd"
2026-03-13 00:02:32.406600 | orchestrator |     }
2026-03-13 00:02:32.406606 | orchestrator |
2026-03-13 00:02:32.406610 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-13 00:02:32.406614 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-13 00:02:32.406619 | orchestrator |       + attachment           = (known after apply)
2026-03-13 00:02:32.406623 | orchestrator |       + availability_zone    = "nova"
2026-03-13 00:02:32.406627 | orchestrator |       + id                   = (known after apply)
2026-03-13 00:02:32.406631 | orchestrator |       + image_id             = (known after apply)
2026-03-13 00:02:32.406636 | orchestrator |       + metadata             = (known after apply)
2026-03-13 00:02:32.406640 | orchestrator |       + name                 = "testbed-volume-1-node-base"
2026-03-13 00:02:32.406644 | orchestrator |       + region               = (known after apply)
2026-03-13 00:02:32.406648 | orchestrator |       + size                 = 80
2026-03-13 00:02:32.406653 | orchestrator |       + volume_retype_policy = "never"
2026-03-13 00:02:32.406657 | orchestrator |       + volume_type          = "ssd"
2026-03-13 00:02:32.406661 | orchestrator |     }
2026-03-13 00:02:32.406667 | orchestrator |
2026-03-13 00:02:32.406671 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-13 00:02:32.406675 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-13 00:02:32.406679 | orchestrator |       + attachment           = (known after apply)
2026-03-13 00:02:32.406683 | orchestrator |       + availability_zone    = "nova"
2026-03-13 00:02:32.406688 | orchestrator |       + id                   = (known after apply)
2026-03-13 00:02:32.406692 | orchestrator |       + image_id             = (known after apply)
2026-03-13 00:02:32.406696 | orchestrator |       + metadata             = (known after apply)
2026-03-13 00:02:32.406700 | orchestrator |       + name                 = "testbed-volume-2-node-base"
2026-03-13 00:02:32.406704 | orchestrator |       + region               = (known after apply)
2026-03-13 00:02:32.406709 | orchestrator |       + size                 = 80
2026-03-13 00:02:32.406716 | orchestrator |       + volume_retype_policy = "never"
2026-03-13 00:02:32.406720 | orchestrator |       + volume_type          = "ssd"
2026-03-13 00:02:32.406724 | orchestrator |     }
2026-03-13 00:02:32.406751 | orchestrator |
2026-03-13 00:02:32.406756 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-13 00:02:32.406760 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-13 00:02:32.406765 | orchestrator |       + attachment           = (known after apply)
2026-03-13 00:02:32.406769 | orchestrator |       + availability_zone    = "nova"
2026-03-13 00:02:32.406773 | orchestrator |       + id                   = (known after apply)
2026-03-13 00:02:32.406777 | orchestrator |       + image_id             = (known after apply)
2026-03-13 00:02:32.406781 | orchestrator |       + metadata             = (known after apply)
2026-03-13 00:02:32.406785 | orchestrator |       + name                 = "testbed-volume-3-node-base"
2026-03-13 00:02:32.406789 | orchestrator |       + region               = (known after apply)
2026-03-13 00:02:32.406793 | orchestrator |       + size                 = 80
2026-03-13 00:02:32.406798 | orchestrator |       + volume_retype_policy = "never"
2026-03-13 00:02:32.406802 | orchestrator |       + volume_type          = "ssd"
2026-03-13 00:02:32.406816 | orchestrator |     }
2026-03-13 00:02:32.406822 | orchestrator |
2026-03-13 00:02:32.406826 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-13 00:02:32.406830 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-13 00:02:32.406834 | orchestrator |       + attachment           = (known after apply)
2026-03-13 00:02:32.406838 | orchestrator |       + availability_zone    = "nova"
2026-03-13 00:02:32.406843 | orchestrator |       + id                   = (known after apply)
2026-03-13 00:02:32.406850 | orchestrator |       + image_id             = (known after apply)
2026-03-13 00:02:32.406855 | orchestrator |       + metadata             = (known after apply)
2026-03-13 00:02:32.406859 | orchestrator |       + name                 = "testbed-volume-4-node-base"
2026-03-13 00:02:32.406863 | orchestrator |       + region               = (known after apply)
2026-03-13 00:02:32.406867 | orchestrator |       + size                 = 80
2026-03-13 00:02:32.406871 | orchestrator |       + volume_retype_policy = "never"
2026-03-13 00:02:32.406875 | orchestrator |       + volume_type          = "ssd"
2026-03-13 00:02:32.406880 | orchestrator |     }
2026-03-13 00:02:32.406885 | orchestrator |
2026-03-13 00:02:32.406889 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-13 00:02:32.406894 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-13 00:02:32.406898 | orchestrator |       + attachment           = (known after apply)
2026-03-13 00:02:32.406902 | orchestrator |       + availability_zone    = "nova"
2026-03-13 00:02:32.406906 | orchestrator |       + id                   = (known after apply)
2026-03-13 00:02:32.406910 | orchestrator |       + image_id             = (known after apply)
2026-03-13 00:02:32.406914 | orchestrator |       + metadata             = (known after apply)
2026-03-13 00:02:32.406918 | orchestrator |       + name                 = "testbed-volume-5-node-base"
2026-03-13 00:02:32.406923 | orchestrator |       + region               = (known after apply)
2026-03-13 00:02:32.406927 | orchestrator |       + size                 = 80
2026-03-13 00:02:32.406931 | orchestrator |       + volume_retype_policy = "never"
2026-03-13 00:02:32.406935 | orchestrator |       + volume_type          = "ssd"
2026-03-13 00:02:32.406939 | orchestrator |     }
2026-03-13 00:02:32.406945 | orchestrator |
2026-03-13 00:02:32.406949 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-13 00:02:32.406953 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-13 00:02:32.406958 | orchestrator |       + attachment           = (known after apply)
2026-03-13 00:02:32.406962 | orchestrator |       + availability_zone    = "nova"
2026-03-13 00:02:32.406966 | orchestrator |       + id                   = (known after apply)
2026-03-13 00:02:32.406970 | orchestrator |       + metadata             = (known after apply)
2026-03-13 00:02:32.406974 | orchestrator |       + name                 = "testbed-volume-0-node-3"
2026-03-13 00:02:32.406978 | orchestrator |       + region               = (known after apply)
2026-03-13 00:02:32.406983 | orchestrator |       + size                 = 20
2026-03-13 00:02:32.406987 | orchestrator |       + volume_retype_policy = "never"
2026-03-13 00:02:32.406991 | orchestrator |       + volume_type          = "ssd"
2026-03-13 00:02:32.406995 | orchestrator |     }
2026-03-13 00:02:32.407015 | orchestrator |
2026-03-13 00:02:32.407020 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-13 00:02:32.407025 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-13 00:02:32.407029 | orchestrator |       + attachment           = (known after apply)
2026-03-13 00:02:32.407033 | orchestrator |       + availability_zone    = "nova"
2026-03-13 00:02:32.407037 | orchestrator |       + id                   = (known after apply)
2026-03-13 00:02:32.407041 | orchestrator |       + metadata             = (known after apply)
2026-03-13 00:02:32.407045 | orchestrator |       + name                 = "testbed-volume-1-node-4"
2026-03-13 00:02:32.407050 | orchestrator |       + region               = (known after apply)
2026-03-13 00:02:32.407054 | orchestrator |       + size                 = 20
2026-03-13 00:02:32.407058 | orchestrator |       + volume_retype_policy = "never"
2026-03-13 00:02:32.407062 | orchestrator |       + volume_type          = "ssd"
2026-03-13 00:02:32.407066 | orchestrator |     }
2026-03-13 00:02:32.407081 | orchestrator |
2026-03-13 00:02:32.407086 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-13 00:02:32.407090 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-13 00:02:32.407094 | orchestrator |       + attachment           = (known after apply)
2026-03-13 00:02:32.407098 | orchestrator |       + availability_zone    = "nova"
2026-03-13 00:02:32.407102 | orchestrator |       + id                   = (known after apply)
2026-03-13 00:02:32.407107 | orchestrator |       + metadata             = (known after apply)
2026-03-13 00:02:32.407111 | orchestrator |       + name                 = "testbed-volume-2-node-5"
2026-03-13 00:02:32.407115 | orchestrator |       + region               = (known after apply)
2026-03-13 00:02:32.407122 | orchestrator |       + size                 = 20
2026-03-13 00:02:32.407127 | orchestrator |       + volume_retype_policy = "never"
2026-03-13 00:02:32.407131 | orchestrator |       + volume_type          = "ssd"
2026-03-13 00:02:32.407135 | orchestrator |     }
2026-03-13 00:02:32.407155 | orchestrator |
2026-03-13 00:02:32.407160 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-13 00:02:32.407164 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-13 00:02:32.407168 | orchestrator |       + attachment           = (known after apply)
2026-03-13 00:02:32.407173 | orchestrator |       + availability_zone    = "nova"
2026-03-13 00:02:32.407177 | orchestrator |       + id                   = (known after apply)
2026-03-13 00:02:32.407184 | orchestrator |       + metadata             = (known after apply)
2026-03-13 00:02:32.407188 | orchestrator |       + name                 = "testbed-volume-3-node-3"
2026-03-13 00:02:32.407192 | orchestrator |       + region               = (known after apply)
2026-03-13 00:02:32.407196 | orchestrator |       + size                 = 20
2026-03-13 00:02:32.407200 | orchestrator |       + volume_retype_policy = "never"
2026-03-13 00:02:32.407204 | orchestrator |       + volume_type          = "ssd"
2026-03-13 00:02:32.407209 | orchestrator |     }
2026-03-13 00:02:32.407223 | orchestrator |
2026-03-13 00:02:32.407228 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-03-13 00:02:32.407232 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-13 00:02:32.407236 | orchestrator |       + attachment           = (known after apply)
2026-03-13 00:02:32.407240 | orchestrator |       + availability_zone    = "nova"
2026-03-13 00:02:32.407244 | orchestrator |       + id                   = (known after apply)
2026-03-13 00:02:32.407249 | orchestrator |       + metadata             = (known after apply)
2026-03-13 00:02:32.407253 | orchestrator |       + name                 = "testbed-volume-4-node-4"
2026-03-13 00:02:32.407257 | orchestrator |       + region               = (known after apply)
2026-03-13 00:02:32.407261 | orchestrator |       + size                 = 20
2026-03-13 00:02:32.407265 | orchestrator |       + volume_retype_policy = "never"
2026-03-13 00:02:32.407269 | orchestrator |       + volume_type          = "ssd"
2026-03-13 00:02:32.407274 | orchestrator |     }
2026-03-13 00:02:32.407279 | orchestrator |
2026-03-13 00:02:32.407283 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-03-13 00:02:32.407288 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-13 00:02:32.407292 | orchestrator |       + attachment           = (known after apply)
2026-03-13 00:02:32.407296 | orchestrator |       + availability_zone    = "nova"
2026-03-13 00:02:32.407300 | orchestrator |       + id                   = (known after apply)
2026-03-13 00:02:32.407304 | orchestrator |       + metadata             = (known after apply)
2026-03-13 00:02:32.407308 | orchestrator |       + name                 = "testbed-volume-5-node-5"
2026-03-13 00:02:32.407312 | orchestrator |       + region               = (known after apply)
2026-03-13 00:02:32.407317 | orchestrator |       + size                 = 20
2026-03-13 00:02:32.407321 | orchestrator |       + volume_retype_policy = "never"
2026-03-13 00:02:32.407325 | orchestrator |       + volume_type          = "ssd"
2026-03-13 00:02:32.407329 | orchestrator |     }
2026-03-13 00:02:32.407351 | orchestrator |
2026-03-13 00:02:32.407357 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-03-13 00:02:32.407361 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-13 00:02:32.407365 | orchestrator |       + attachment           = (known after apply)
2026-03-13 00:02:32.407369 | orchestrator |       + availability_zone    = "nova"
2026-03-13 00:02:32.407373 | orchestrator |       + id                   = (known after apply)
2026-03-13 00:02:32.407378 | orchestrator |       + metadata             = (known after apply)
2026-03-13 00:02:32.407382 | orchestrator |       + name                 = "testbed-volume-6-node-3"
2026-03-13 00:02:32.407386 | orchestrator |       + region               = (known after apply)
2026-03-13 00:02:32.407390 | orchestrator |       + size                 = 20
2026-03-13 00:02:32.407394 | orchestrator |       + volume_retype_policy = "never"
2026-03-13 00:02:32.407398 | orchestrator |       + volume_type          = "ssd"
2026-03-13 00:02:32.407402 | orchestrator |     }
2026-03-13 00:02:32.407417 | orchestrator |
2026-03-13 00:02:32.407422 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-03-13 00:02:32.407426 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-13 00:02:32.407434 | orchestrator |       + attachment           = (known after apply)
2026-03-13 00:02:32.407438 | orchestrator |       + availability_zone    = "nova"
2026-03-13 00:02:32.407442 | orchestrator |       + id                   = (known after apply)
2026-03-13 00:02:32.407447 | orchestrator |       + metadata             = (known after apply)
2026-03-13 00:02:32.407451 | orchestrator |       + name                 = "testbed-volume-7-node-4"
2026-03-13 00:02:32.407455 | orchestrator |       + region               = (known after apply)
2026-03-13 00:02:32.407469 | orchestrator |       + size                 = 20
2026-03-13 00:02:32.407473 | orchestrator |       + volume_retype_policy = "never"
2026-03-13 00:02:32.407477 | orchestrator |       + volume_type          = "ssd"
2026-03-13 00:02:32.407482 | orchestrator |     }
2026-03-13 00:02:32.407487 | orchestrator |
2026-03-13 00:02:32.407492 | orchestrator |   #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-13 00:02:32.407496 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-13 00:02:32.407500 | orchestrator | + attachment = (known after apply) 2026-03-13 00:02:32.407504 | orchestrator | + availability_zone = "nova" 2026-03-13 00:02:32.407509 | orchestrator | + id = (known after apply) 2026-03-13 00:02:32.407513 | orchestrator | + metadata = (known after apply) 2026-03-13 00:02:32.407517 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-13 00:02:32.407521 | orchestrator | + region = (known after apply) 2026-03-13 00:02:32.407525 | orchestrator | + size = 20 2026-03-13 00:02:32.407530 | orchestrator | + volume_retype_policy = "never" 2026-03-13 00:02:32.407534 | orchestrator | + volume_type = "ssd" 2026-03-13 00:02:32.407538 | orchestrator | } 2026-03-13 00:02:32.407727 | orchestrator | 2026-03-13 00:02:32.407733 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-13 00:02:32.407737 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-13 00:02:32.407741 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-13 00:02:32.407745 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-13 00:02:32.407749 | orchestrator | + all_metadata = (known after apply) 2026-03-13 00:02:32.407753 | orchestrator | + all_tags = (known after apply) 2026-03-13 00:02:32.407757 | orchestrator | + availability_zone = "nova" 2026-03-13 00:02:32.407762 | orchestrator | + config_drive = true 2026-03-13 00:02:32.407769 | orchestrator | + created = (known after apply) 2026-03-13 00:02:32.407782 | orchestrator | + flavor_id = (known after apply) 2026-03-13 00:02:32.407786 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-13 00:02:32.407790 | orchestrator | + force_delete = false 2026-03-13 00:02:32.407794 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-13 00:02:32.407799 | 
orchestrator | + id = (known after apply) 2026-03-13 00:02:32.407803 | orchestrator | + image_id = (known after apply) 2026-03-13 00:02:32.407807 | orchestrator | + image_name = (known after apply) 2026-03-13 00:02:32.407811 | orchestrator | + key_pair = "testbed" 2026-03-13 00:02:32.407815 | orchestrator | + name = "testbed-manager" 2026-03-13 00:02:32.407819 | orchestrator | + power_state = "active" 2026-03-13 00:02:32.407823 | orchestrator | + region = (known after apply) 2026-03-13 00:02:32.407828 | orchestrator | + security_groups = (known after apply) 2026-03-13 00:02:32.407832 | orchestrator | + stop_before_destroy = false 2026-03-13 00:02:32.407836 | orchestrator | + updated = (known after apply) 2026-03-13 00:02:32.407840 | orchestrator | + user_data = (sensitive value) 2026-03-13 00:02:32.407844 | orchestrator | 2026-03-13 00:02:32.407849 | orchestrator | + block_device { 2026-03-13 00:02:32.407853 | orchestrator | + boot_index = 0 2026-03-13 00:02:32.407857 | orchestrator | + delete_on_termination = false 2026-03-13 00:02:32.407861 | orchestrator | + destination_type = "volume" 2026-03-13 00:02:32.407865 | orchestrator | + multiattach = false 2026-03-13 00:02:32.407869 | orchestrator | + source_type = "volume" 2026-03-13 00:02:32.407874 | orchestrator | + uuid = (known after apply) 2026-03-13 00:02:32.407882 | orchestrator | } 2026-03-13 00:02:32.407886 | orchestrator | 2026-03-13 00:02:32.407890 | orchestrator | + network { 2026-03-13 00:02:32.407895 | orchestrator | + access_network = false 2026-03-13 00:02:32.407899 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-13 00:02:32.407903 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-13 00:02:32.407907 | orchestrator | + mac = (known after apply) 2026-03-13 00:02:32.407911 | orchestrator | + name = (known after apply) 2026-03-13 00:02:32.407915 | orchestrator | + port = (known after apply) 2026-03-13 00:02:32.407919 | orchestrator | + uuid = (known after apply) 2026-03-13 
00:02:32.407924 | orchestrator | } 2026-03-13 00:02:32.407928 | orchestrator | } 2026-03-13 00:02:32.407964 | orchestrator | 2026-03-13 00:02:32.407970 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-13 00:02:32.407974 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-13 00:02:32.407978 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-13 00:02:32.407982 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-13 00:02:32.407987 | orchestrator | + all_metadata = (known after apply) 2026-03-13 00:02:32.407991 | orchestrator | + all_tags = (known after apply) 2026-03-13 00:02:32.407995 | orchestrator | + availability_zone = "nova" 2026-03-13 00:02:32.407999 | orchestrator | + config_drive = true 2026-03-13 00:02:32.408003 | orchestrator | + created = (known after apply) 2026-03-13 00:02:32.408025 | orchestrator | + flavor_id = (known after apply) 2026-03-13 00:02:32.408030 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-13 00:02:32.408034 | orchestrator | + force_delete = false 2026-03-13 00:02:32.408038 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-13 00:02:32.408042 | orchestrator | + id = (known after apply) 2026-03-13 00:02:32.408046 | orchestrator | + image_id = (known after apply) 2026-03-13 00:02:32.408050 | orchestrator | + image_name = (known after apply) 2026-03-13 00:02:32.408055 | orchestrator | + key_pair = "testbed" 2026-03-13 00:02:32.408059 | orchestrator | + name = "testbed-node-0" 2026-03-13 00:02:32.408063 | orchestrator | + power_state = "active" 2026-03-13 00:02:32.408067 | orchestrator | + region = (known after apply) 2026-03-13 00:02:32.408071 | orchestrator | + security_groups = (known after apply) 2026-03-13 00:02:32.408075 | orchestrator | + stop_before_destroy = false 2026-03-13 00:02:32.408079 | orchestrator | + updated = (known after apply) 2026-03-13 00:02:32.408083 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-13 00:02:32.408103 | orchestrator | 2026-03-13 00:02:32.408108 | orchestrator | + block_device { 2026-03-13 00:02:32.408112 | orchestrator | + boot_index = 0 2026-03-13 00:02:32.408116 | orchestrator | + delete_on_termination = false 2026-03-13 00:02:32.408120 | orchestrator | + destination_type = "volume" 2026-03-13 00:02:32.408124 | orchestrator | + multiattach = false 2026-03-13 00:02:32.408128 | orchestrator | + source_type = "volume" 2026-03-13 00:02:32.408132 | orchestrator | + uuid = (known after apply) 2026-03-13 00:02:32.408137 | orchestrator | } 2026-03-13 00:02:32.408141 | orchestrator | 2026-03-13 00:02:32.408145 | orchestrator | + network { 2026-03-13 00:02:32.408149 | orchestrator | + access_network = false 2026-03-13 00:02:32.408153 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-13 00:02:32.408157 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-13 00:02:32.408162 | orchestrator | + mac = (known after apply) 2026-03-13 00:02:32.408182 | orchestrator | + name = (known after apply) 2026-03-13 00:02:32.408186 | orchestrator | + port = (known after apply) 2026-03-13 00:02:32.408190 | orchestrator | + uuid = (known after apply) 2026-03-13 00:02:32.408194 | orchestrator | } 2026-03-13 00:02:32.408263 | orchestrator | } 2026-03-13 00:02:32.408270 | orchestrator | 2026-03-13 00:02:32.408275 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-13 00:02:32.408279 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-13 00:02:32.408283 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-13 00:02:32.408291 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-13 00:02:32.408296 | orchestrator | + all_metadata = (known after apply) 2026-03-13 00:02:32.408300 | orchestrator | + all_tags = (known after apply) 2026-03-13 00:02:32.408304 | orchestrator | + availability_zone = "nova" 2026-03-13 00:02:32.408308 
| orchestrator | + config_drive = true 2026-03-13 00:02:32.408312 | orchestrator | + created = (known after apply) 2026-03-13 00:02:32.408316 | orchestrator | + flavor_id = (known after apply) 2026-03-13 00:02:32.408320 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-13 00:02:32.408365 | orchestrator | + force_delete = false 2026-03-13 00:02:32.408370 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-13 00:02:32.408374 | orchestrator | + id = (known after apply) 2026-03-13 00:02:32.408378 | orchestrator | + image_id = (known after apply) 2026-03-13 00:02:32.408382 | orchestrator | + image_name = (known after apply) 2026-03-13 00:02:32.408386 | orchestrator | + key_pair = "testbed" 2026-03-13 00:02:32.408390 | orchestrator | + name = "testbed-node-1" 2026-03-13 00:02:32.408394 | orchestrator | + power_state = "active" 2026-03-13 00:02:32.408399 | orchestrator | + region = (known after apply) 2026-03-13 00:02:32.408419 | orchestrator | + security_groups = (known after apply) 2026-03-13 00:02:32.408423 | orchestrator | + stop_before_destroy = false 2026-03-13 00:02:32.408427 | orchestrator | + updated = (known after apply) 2026-03-13 00:02:32.408435 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-13 00:02:32.408439 | orchestrator | 2026-03-13 00:02:32.408443 | orchestrator | + block_device { 2026-03-13 00:02:32.408448 | orchestrator | + boot_index = 0 2026-03-13 00:02:32.408452 | orchestrator | + delete_on_termination = false 2026-03-13 00:02:32.408456 | orchestrator | + destination_type = "volume" 2026-03-13 00:02:32.408471 | orchestrator | + multiattach = false 2026-03-13 00:02:32.408476 | orchestrator | + source_type = "volume" 2026-03-13 00:02:32.408498 | orchestrator | + uuid = (known after apply) 2026-03-13 00:02:32.408503 | orchestrator | } 2026-03-13 00:02:32.408507 | orchestrator | 2026-03-13 00:02:32.408511 | orchestrator | + network { 2026-03-13 00:02:32.408515 | orchestrator | + access_network = 
false 2026-03-13 00:02:32.408519 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-13 00:02:32.408523 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-13 00:02:32.408527 | orchestrator | + mac = (known after apply) 2026-03-13 00:02:32.408531 | orchestrator | + name = (known after apply) 2026-03-13 00:02:32.408535 | orchestrator | + port = (known after apply) 2026-03-13 00:02:32.408539 | orchestrator | + uuid = (known after apply) 2026-03-13 00:02:32.408543 | orchestrator | } 2026-03-13 00:02:32.408547 | orchestrator | } 2026-03-13 00:02:32.408554 | orchestrator | 2026-03-13 00:02:32.408573 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-13 00:02:32.408578 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-13 00:02:32.408582 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-13 00:02:32.408586 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-13 00:02:32.408591 | orchestrator | + all_metadata = (known after apply) 2026-03-13 00:02:32.408595 | orchestrator | + all_tags = (known after apply) 2026-03-13 00:02:32.408599 | orchestrator | + availability_zone = "nova" 2026-03-13 00:02:32.408603 | orchestrator | + config_drive = true 2026-03-13 00:02:32.408607 | orchestrator | + created = (known after apply) 2026-03-13 00:02:32.408612 | orchestrator | + flavor_id = (known after apply) 2026-03-13 00:02:32.408616 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-13 00:02:32.408620 | orchestrator | + force_delete = false 2026-03-13 00:02:32.408624 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-13 00:02:32.408628 | orchestrator | + id = (known after apply) 2026-03-13 00:02:32.408632 | orchestrator | + image_id = (known after apply) 2026-03-13 00:02:32.408656 | orchestrator | + image_name = (known after apply) 2026-03-13 00:02:32.408661 | orchestrator | + key_pair = "testbed" 2026-03-13 00:02:32.408665 | orchestrator | + name = 
"testbed-node-2" 2026-03-13 00:02:32.408669 | orchestrator | + power_state = "active" 2026-03-13 00:02:32.408673 | orchestrator | + region = (known after apply) 2026-03-13 00:02:32.408677 | orchestrator | + security_groups = (known after apply) 2026-03-13 00:02:32.408681 | orchestrator | + stop_before_destroy = false 2026-03-13 00:02:32.408685 | orchestrator | + updated = (known after apply) 2026-03-13 00:02:32.408689 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-13 00:02:32.408693 | orchestrator | 2026-03-13 00:02:32.408697 | orchestrator | + block_device { 2026-03-13 00:02:32.408701 | orchestrator | + boot_index = 0 2026-03-13 00:02:32.408705 | orchestrator | + delete_on_termination = false 2026-03-13 00:02:32.408709 | orchestrator | + destination_type = "volume" 2026-03-13 00:02:32.408714 | orchestrator | + multiattach = false 2026-03-13 00:02:32.408734 | orchestrator | + source_type = "volume" 2026-03-13 00:02:32.408738 | orchestrator | + uuid = (known after apply) 2026-03-13 00:02:32.408743 | orchestrator | } 2026-03-13 00:02:32.408747 | orchestrator | 2026-03-13 00:02:32.408751 | orchestrator | + network { 2026-03-13 00:02:32.408755 | orchestrator | + access_network = false 2026-03-13 00:02:32.408759 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-13 00:02:32.408763 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-13 00:02:32.408767 | orchestrator | + mac = (known after apply) 2026-03-13 00:02:32.408771 | orchestrator | + name = (known after apply) 2026-03-13 00:02:32.408775 | orchestrator | + port = (known after apply) 2026-03-13 00:02:32.408779 | orchestrator | + uuid = (known after apply) 2026-03-13 00:02:32.408783 | orchestrator | } 2026-03-13 00:02:32.408787 | orchestrator | } 2026-03-13 00:02:32.408793 | orchestrator | 2026-03-13 00:02:32.408822 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-13 00:02:32.408829 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-03-13 00:02:32.408836 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-13 00:02:32.408843 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-13 00:02:32.408851 | orchestrator | + all_metadata = (known after apply) 2026-03-13 00:02:32.408857 | orchestrator | + all_tags = (known after apply) 2026-03-13 00:02:32.408864 | orchestrator | + availability_zone = "nova" 2026-03-13 00:02:32.408870 | orchestrator | + config_drive = true 2026-03-13 00:02:32.408898 | orchestrator | + created = (known after apply) 2026-03-13 00:02:32.408906 | orchestrator | + flavor_id = (known after apply) 2026-03-13 00:02:32.408912 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-13 00:02:32.408919 | orchestrator | + force_delete = false 2026-03-13 00:02:32.408926 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-13 00:02:32.408932 | orchestrator | + id = (known after apply) 2026-03-13 00:02:32.408939 | orchestrator | + image_id = (known after apply) 2026-03-13 00:02:32.408945 | orchestrator | + image_name = (known after apply) 2026-03-13 00:02:32.408952 | orchestrator | + key_pair = "testbed" 2026-03-13 00:02:32.408983 | orchestrator | + name = "testbed-node-3" 2026-03-13 00:02:32.408990 | orchestrator | + power_state = "active" 2026-03-13 00:02:32.408994 | orchestrator | + region = (known after apply) 2026-03-13 00:02:32.408999 | orchestrator | + security_groups = (known after apply) 2026-03-13 00:02:32.409003 | orchestrator | + stop_before_destroy = false 2026-03-13 00:02:32.409007 | orchestrator | + updated = (known after apply) 2026-03-13 00:02:32.409011 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-13 00:02:32.409015 | orchestrator | 2026-03-13 00:02:32.409019 | orchestrator | + block_device { 2026-03-13 00:02:32.409023 | orchestrator | + boot_index = 0 2026-03-13 00:02:32.409027 | orchestrator | + delete_on_termination = false 2026-03-13 
00:02:32.409031 | orchestrator | + destination_type = "volume" 2026-03-13 00:02:32.409058 | orchestrator | + multiattach = false 2026-03-13 00:02:32.409062 | orchestrator | + source_type = "volume" 2026-03-13 00:02:32.409066 | orchestrator | + uuid = (known after apply) 2026-03-13 00:02:32.409071 | orchestrator | } 2026-03-13 00:02:32.409075 | orchestrator | 2026-03-13 00:02:32.409079 | orchestrator | + network { 2026-03-13 00:02:32.409083 | orchestrator | + access_network = false 2026-03-13 00:02:32.409087 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-13 00:02:32.409091 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-13 00:02:32.409095 | orchestrator | + mac = (known after apply) 2026-03-13 00:02:32.409099 | orchestrator | + name = (known after apply) 2026-03-13 00:02:32.409103 | orchestrator | + port = (known after apply) 2026-03-13 00:02:32.409108 | orchestrator | + uuid = (known after apply) 2026-03-13 00:02:32.409112 | orchestrator | } 2026-03-13 00:02:32.409116 | orchestrator | } 2026-03-13 00:02:32.409139 | orchestrator | 2026-03-13 00:02:32.409144 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-13 00:02:32.409148 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-13 00:02:32.409153 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-13 00:02:32.409157 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-13 00:02:32.409161 | orchestrator | + all_metadata = (known after apply) 2026-03-13 00:02:32.409165 | orchestrator | + all_tags = (known after apply) 2026-03-13 00:02:32.409169 | orchestrator | + availability_zone = "nova" 2026-03-13 00:02:32.409173 | orchestrator | + config_drive = true 2026-03-13 00:02:32.409177 | orchestrator | + created = (known after apply) 2026-03-13 00:02:32.409181 | orchestrator | + flavor_id = (known after apply) 2026-03-13 00:02:32.409185 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-13 00:02:32.409189 | 
orchestrator | + force_delete = false 2026-03-13 00:02:32.409193 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-13 00:02:32.409215 | orchestrator | + id = (known after apply) 2026-03-13 00:02:32.409220 | orchestrator | + image_id = (known after apply) 2026-03-13 00:02:32.409224 | orchestrator | + image_name = (known after apply) 2026-03-13 00:02:32.409228 | orchestrator | + key_pair = "testbed" 2026-03-13 00:02:32.409232 | orchestrator | + name = "testbed-node-4" 2026-03-13 00:02:32.409236 | orchestrator | + power_state = "active" 2026-03-13 00:02:32.409240 | orchestrator | + region = (known after apply) 2026-03-13 00:02:32.409244 | orchestrator | + security_groups = (known after apply) 2026-03-13 00:02:32.409248 | orchestrator | + stop_before_destroy = false 2026-03-13 00:02:32.409252 | orchestrator | + updated = (known after apply) 2026-03-13 00:02:32.409257 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-13 00:02:32.409261 | orchestrator | 2026-03-13 00:02:32.409265 | orchestrator | + block_device { 2026-03-13 00:02:32.409269 | orchestrator | + boot_index = 0 2026-03-13 00:02:32.409273 | orchestrator | + delete_on_termination = false 2026-03-13 00:02:32.409295 | orchestrator | + destination_type = "volume" 2026-03-13 00:02:32.409299 | orchestrator | + multiattach = false 2026-03-13 00:02:32.409303 | orchestrator | + source_type = "volume" 2026-03-13 00:02:32.409307 | orchestrator | + uuid = (known after apply) 2026-03-13 00:02:32.409311 | orchestrator | } 2026-03-13 00:02:32.409316 | orchestrator | 2026-03-13 00:02:32.409320 | orchestrator | + network { 2026-03-13 00:02:32.409324 | orchestrator | + access_network = false 2026-03-13 00:02:32.409328 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-13 00:02:32.409332 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-13 00:02:32.409336 | orchestrator | + mac = (known after apply) 2026-03-13 00:02:32.409340 | orchestrator | + name = (known 
after apply) 2026-03-13 00:02:32.409344 | orchestrator | + port = (known after apply) 2026-03-13 00:02:32.409348 | orchestrator | + uuid = (known after apply) 2026-03-13 00:02:32.409352 | orchestrator | } 2026-03-13 00:02:32.409373 | orchestrator | } 2026-03-13 00:02:32.409381 | orchestrator | 2026-03-13 00:02:32.409385 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-13 00:02:32.409389 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-13 00:02:32.409393 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-13 00:02:32.409397 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-13 00:02:32.409402 | orchestrator | + all_metadata = (known after apply) 2026-03-13 00:02:32.409406 | orchestrator | + all_tags = (known after apply) 2026-03-13 00:02:32.409410 | orchestrator | + availability_zone = "nova" 2026-03-13 00:02:32.409414 | orchestrator | + config_drive = true 2026-03-13 00:02:32.409418 | orchestrator | + created = (known after apply) 2026-03-13 00:02:32.409422 | orchestrator | + flavor_id = (known after apply) 2026-03-13 00:02:32.409426 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-13 00:02:32.409430 | orchestrator | + force_delete = false 2026-03-13 00:02:32.409449 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-13 00:02:32.409454 | orchestrator | + id = (known after apply) 2026-03-13 00:02:32.409458 | orchestrator | + image_id = (known after apply) 2026-03-13 00:02:32.409551 | orchestrator | + image_name = (known after apply) 2026-03-13 00:02:32.409556 | orchestrator | + key_pair = "testbed" 2026-03-13 00:02:32.409560 | orchestrator | + name = "testbed-node-5" 2026-03-13 00:02:32.409564 | orchestrator | + power_state = "active" 2026-03-13 00:02:32.409568 | orchestrator | + region = (known after apply) 2026-03-13 00:02:32.409572 | orchestrator | + security_groups = (known after apply) 2026-03-13 00:02:32.409576 | orchestrator | + 
stop_before_destroy = false 2026-03-13 00:02:32.409580 | orchestrator | + updated = (known after apply) 2026-03-13 00:02:32.409584 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-13 00:02:32.409588 | orchestrator | 2026-03-13 00:02:32.409592 | orchestrator | + block_device { 2026-03-13 00:02:32.409596 | orchestrator | + boot_index = 0 2026-03-13 00:02:32.409601 | orchestrator | + delete_on_termination = false 2026-03-13 00:02:32.409605 | orchestrator | + destination_type = "volume" 2026-03-13 00:02:32.409625 | orchestrator | + multiattach = false 2026-03-13 00:02:32.409629 | orchestrator | + source_type = "volume" 2026-03-13 00:02:32.409634 | orchestrator | + uuid = (known after apply) 2026-03-13 00:02:32.409638 | orchestrator | } 2026-03-13 00:02:32.409642 | orchestrator | 2026-03-13 00:02:32.409646 | orchestrator | + network { 2026-03-13 00:02:32.409650 | orchestrator | + access_network = false 2026-03-13 00:02:32.409654 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-13 00:02:32.409658 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-13 00:02:32.409663 | orchestrator | + mac = (known after apply) 2026-03-13 00:02:32.409667 | orchestrator | + name = (known after apply) 2026-03-13 00:02:32.409671 | orchestrator | + port = (known after apply) 2026-03-13 00:02:32.409675 | orchestrator | + uuid = (known after apply) 2026-03-13 00:02:32.409679 | orchestrator | } 2026-03-13 00:02:32.409683 | orchestrator | } 2026-03-13 00:02:32.409706 | orchestrator | 2026-03-13 00:02:32.409712 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-13 00:02:32.409716 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-03-13 00:02:32.409720 | orchestrator | + fingerprint = (known after apply) 2026-03-13 00:02:32.409724 | orchestrator | + id = (known after apply) 2026-03-13 00:02:32.409728 | orchestrator | + name = "testbed" 2026-03-13 00:02:32.409732 | orchestrator | + private_key = 
(sensitive value) 2026-03-13 00:02:32.409736 | orchestrator | + public_key = (known after apply) 2026-03-13 00:02:32.409741 | orchestrator | + region = (known after apply) 2026-03-13 00:02:32.409745 | orchestrator | + user_id = (known after apply) 2026-03-13 00:02:32.409749 | orchestrator | } 2026-03-13 00:02:32.409753 | orchestrator | 2026-03-13 00:02:32.409757 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-13 00:02:32.409761 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-13 00:02:32.409785 | orchestrator | + device = (known after apply) 2026-03-13 00:02:32.409789 | orchestrator | + id = (known after apply) 2026-03-13 00:02:32.409793 | orchestrator | + instance_id = (known after apply) 2026-03-13 00:02:32.409797 | orchestrator | + region = (known after apply) 2026-03-13 00:02:32.409804 | orchestrator | + volume_id = (known after apply) 2026-03-13 00:02:32.409807 | orchestrator | } 2026-03-13 00:02:32.409811 | orchestrator | 2026-03-13 00:02:32.409815 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-13 00:02:32.409819 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-13 00:02:32.409822 | orchestrator | + device = (known after apply) 2026-03-13 00:02:32.409826 | orchestrator | + id = (known after apply) 2026-03-13 00:02:32.409830 | orchestrator | + instance_id = (known after apply) 2026-03-13 00:02:32.409833 | orchestrator | + region = (known after apply) 2026-03-13 00:02:32.409837 | orchestrator | + volume_id = (known after apply) 2026-03-13 00:02:32.409841 | orchestrator | } 2026-03-13 00:02:32.409860 | orchestrator | 2026-03-13 00:02:32.409864 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-13 00:02:32.409868 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
2026-03-13 00:02:32.409872 | orchestrator | + device = (known after apply)
2026-03-13 00:02:32.409876 | orchestrator | + id = (known after apply)
2026-03-13 00:02:32.409879 | orchestrator | + instance_id = (known after apply)
2026-03-13 00:02:32.409883 | orchestrator | + region = (known after apply)
2026-03-13 00:02:32.409887 | orchestrator | + volume_id = (known after apply)
2026-03-13 00:02:32.409891 | orchestrator | }
2026-03-13 00:02:32.409895 | orchestrator |
2026-03-13 00:02:32.409898 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
2026-03-13 00:02:32.409902 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-13 00:02:32.409906 | orchestrator | + device = (known after apply)
2026-03-13 00:02:32.409910 | orchestrator | + id = (known after apply)
2026-03-13 00:02:32.409914 | orchestrator | + instance_id = (known after apply)
2026-03-13 00:02:32.409917 | orchestrator | + region = (known after apply)
2026-03-13 00:02:32.409937 | orchestrator | + volume_id = (known after apply)
2026-03-13 00:02:32.409942 | orchestrator | }
2026-03-13 00:02:32.409946 | orchestrator |
2026-03-13 00:02:32.409950 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
2026-03-13 00:02:32.409953 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-13 00:02:32.409957 | orchestrator | + device = (known after apply)
2026-03-13 00:02:32.409961 | orchestrator | + id = (known after apply)
2026-03-13 00:02:32.409964 | orchestrator | + instance_id = (known after apply)
2026-03-13 00:02:32.409968 | orchestrator | + region = (known after apply)
2026-03-13 00:02:32.409972 | orchestrator | + volume_id = (known after apply)
2026-03-13 00:02:32.409975 | orchestrator | }
2026-03-13 00:02:32.409979 | orchestrator |
2026-03-13 00:02:32.409983 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
2026-03-13 00:02:32.409987 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-13 00:02:32.409990 | orchestrator | + device = (known after apply)
2026-03-13 00:02:32.409994 | orchestrator | + id = (known after apply)
2026-03-13 00:02:32.409998 | orchestrator | + instance_id = (known after apply)
2026-03-13 00:02:32.410029 | orchestrator | + region = (known after apply)
2026-03-13 00:02:32.410033 | orchestrator | + volume_id = (known after apply)
2026-03-13 00:02:32.410037 | orchestrator | }
2026-03-13 00:02:32.410040 | orchestrator |
2026-03-13 00:02:32.410044 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
2026-03-13 00:02:32.410048 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-13 00:02:32.410051 | orchestrator | + device = (known after apply)
2026-03-13 00:02:32.410055 | orchestrator | + id = (known after apply)
2026-03-13 00:02:32.410059 | orchestrator | + instance_id = (known after apply)
2026-03-13 00:02:32.410063 | orchestrator | + region = (known after apply)
2026-03-13 00:02:32.410104 | orchestrator | + volume_id = (known after apply)
2026-03-13 00:02:32.410109 | orchestrator | }
2026-03-13 00:02:32.410112 | orchestrator |
2026-03-13 00:02:32.410116 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
2026-03-13 00:02:32.410120 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-13 00:02:32.410124 | orchestrator | + device = (known after apply)
2026-03-13 00:02:32.410127 | orchestrator | + id = (known after apply)
2026-03-13 00:02:32.410131 | orchestrator | + instance_id = (known after apply)
2026-03-13 00:02:32.410135 | orchestrator | + region = (known after apply)
2026-03-13 00:02:32.410139 | orchestrator | + volume_id = (known after apply)
2026-03-13 00:02:32.410143 | orchestrator | }
2026-03-13 00:02:32.410174 | orchestrator |
2026-03-13 00:02:32.410179 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
2026-03-13 00:02:32.410183 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-13 00:02:32.410186 | orchestrator | + device = (known after apply)
2026-03-13 00:02:32.410190 | orchestrator | + id = (known after apply)
2026-03-13 00:02:32.410194 | orchestrator | + instance_id = (known after apply)
2026-03-13 00:02:32.410198 | orchestrator | + region = (known after apply)
2026-03-13 00:02:32.410202 | orchestrator | + volume_id = (known after apply)
2026-03-13 00:02:32.410205 | orchestrator | }
2026-03-13 00:02:32.410209 | orchestrator |
2026-03-13 00:02:32.410213 | orchestrator | # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
2026-03-13 00:02:32.410217 | orchestrator | + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
2026-03-13 00:02:32.410221 | orchestrator | + fixed_ip = (known after apply)
2026-03-13 00:02:32.410225 | orchestrator | + floating_ip = (known after apply)
2026-03-13 00:02:32.410244 | orchestrator | + id = (known after apply)
2026-03-13 00:02:32.410252 | orchestrator | + port_id = (known after apply)
2026-03-13 00:02:32.410257 | orchestrator | + region = (known after apply)
2026-03-13 00:02:32.410260 | orchestrator | }
2026-03-13 00:02:32.410264 | orchestrator |
2026-03-13 00:02:32.410268 | orchestrator | # openstack_networking_floatingip_v2.manager_floating_ip will be created
2026-03-13 00:02:32.410272 | orchestrator | + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
2026-03-13 00:02:32.410276 | orchestrator | + address = (known after apply)
2026-03-13 00:02:32.410279 | orchestrator | + all_tags = (known after apply)
2026-03-13 00:02:32.410286 | orchestrator | + dns_domain = (known after apply)
2026-03-13 00:02:32.410290 | orchestrator | + dns_name = (known after apply)
2026-03-13 00:02:32.410293 | orchestrator | + fixed_ip = (known after apply)
2026-03-13 00:02:32.410297 | orchestrator | + id = (known after apply)
2026-03-13 00:02:32.410301 | orchestrator | + pool = "public"
2026-03-13 00:02:32.410322 | orchestrator | + port_id = (known after apply)
2026-03-13 00:02:32.410326 | orchestrator | + region = (known after apply)
2026-03-13 00:02:32.410330 | orchestrator | + subnet_id = (known after apply)
2026-03-13 00:02:32.410334 | orchestrator | + tenant_id = (known after apply)
2026-03-13 00:02:32.410337 | orchestrator | }
2026-03-13 00:02:32.410341 | orchestrator |
2026-03-13 00:02:32.410345 | orchestrator | # openstack_networking_network_v2.net_management will be created
2026-03-13 00:02:32.410349 | orchestrator | + resource "openstack_networking_network_v2" "net_management" {
2026-03-13 00:02:32.410353 | orchestrator | + admin_state_up = (known after apply)
2026-03-13 00:02:32.410356 | orchestrator | + all_tags = (known after apply)
2026-03-13 00:02:32.410360 | orchestrator | + availability_zone_hints = [
2026-03-13 00:02:32.410364 | orchestrator | + "nova",
2026-03-13 00:02:32.410368 | orchestrator | ]
2026-03-13 00:02:32.410371 | orchestrator | + dns_domain = (known after apply)
2026-03-13 00:02:32.410375 | orchestrator | + external = (known after apply)
2026-03-13 00:02:32.410379 | orchestrator | + id = (known after apply)
2026-03-13 00:02:32.410397 | orchestrator | + mtu = (known after apply)
2026-03-13 00:02:32.410401 | orchestrator | + name = "net-testbed-management"
2026-03-13 00:02:32.410405 | orchestrator | + port_security_enabled = (known after apply)
2026-03-13 00:02:32.410412 | orchestrator | + qos_policy_id = (known after apply)
2026-03-13 00:02:32.410416 | orchestrator | + region = (known after apply)
2026-03-13 00:02:32.410420 | orchestrator | + shared = (known after apply)
2026-03-13 00:02:32.410423 | orchestrator | + tenant_id = (known after apply)
2026-03-13 00:02:32.410427 | orchestrator | + transparent_vlan = (known after apply)
2026-03-13 00:02:32.410431 | orchestrator |
2026-03-13 00:02:32.410435 | orchestrator | + segments (known after apply)
2026-03-13 00:02:32.410439 | orchestrator | }
2026-03-13 00:02:32.410442 | orchestrator |
2026-03-13 00:02:32.410446 | orchestrator | # openstack_networking_port_v2.manager_port_management will be created
2026-03-13 00:02:32.410450 | orchestrator | + resource "openstack_networking_port_v2" "manager_port_management" {
2026-03-13 00:02:32.410454 | orchestrator | + admin_state_up = (known after apply)
2026-03-13 00:02:32.410458 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-13 00:02:32.410486 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-13 00:02:32.410490 | orchestrator | + all_tags = (known after apply)
2026-03-13 00:02:32.410493 | orchestrator | + device_id = (known after apply)
2026-03-13 00:02:32.410497 | orchestrator | + device_owner = (known after apply)
2026-03-13 00:02:32.410501 | orchestrator | + dns_assignment = (known after apply)
2026-03-13 00:02:32.410505 | orchestrator | + dns_name = (known after apply)
2026-03-13 00:02:32.410508 | orchestrator | + id = (known after apply)
2026-03-13 00:02:32.410512 | orchestrator | + mac_address = (known after apply)
2026-03-13 00:02:32.410516 | orchestrator | + network_id = (known after apply)
2026-03-13 00:02:32.410520 | orchestrator | + port_security_enabled = (known after apply)
2026-03-13 00:02:32.410523 | orchestrator | + qos_policy_id = (known after apply)
2026-03-13 00:02:32.410527 | orchestrator | + region = (known after apply)
2026-03-13 00:02:32.410531 | orchestrator | + security_group_ids = (known after apply)
2026-03-13 00:02:32.410535 | orchestrator | + tenant_id = (known after apply)
2026-03-13 00:02:32.410539 | orchestrator |
2026-03-13 00:02:32.410542 | orchestrator | + allowed_address_pairs {
2026-03-13 00:02:32.410561 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-13 00:02:32.410566 | orchestrator | }
2026-03-13 00:02:32.410569 | orchestrator |
2026-03-13 00:02:32.410573 | orchestrator | + binding (known after apply)
2026-03-13 00:02:32.410577 | orchestrator |
2026-03-13 00:02:32.410581 | orchestrator | + fixed_ip {
2026-03-13 00:02:32.410585 | orchestrator | + ip_address = "192.168.16.5"
2026-03-13 00:02:32.410588 | orchestrator | + subnet_id = (known after apply)
2026-03-13 00:02:32.410592 | orchestrator | }
2026-03-13 00:02:32.410596 | orchestrator | }
2026-03-13 00:02:32.410600 | orchestrator |
2026-03-13 00:02:32.410603 | orchestrator | # openstack_networking_port_v2.node_port_management[0] will be created
2026-03-13 00:02:32.410607 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-13 00:02:32.410611 | orchestrator | + admin_state_up = (known after apply)
2026-03-13 00:02:32.410615 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-13 00:02:32.410619 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-13 00:02:32.410637 | orchestrator | + all_tags = (known after apply)
2026-03-13 00:02:32.410641 | orchestrator | + device_id = (known after apply)
2026-03-13 00:02:32.410645 | orchestrator | + device_owner = (known after apply)
2026-03-13 00:02:32.410649 | orchestrator | + dns_assignment = (known after apply)
2026-03-13 00:02:32.410653 | orchestrator | + dns_name = (known after apply)
2026-03-13 00:02:32.410656 | orchestrator | + id = (known after apply)
2026-03-13 00:02:32.410660 | orchestrator | + mac_address = (known after apply)
2026-03-13 00:02:32.410664 | orchestrator | + network_id = (known after apply)
2026-03-13 00:02:32.410668 | orchestrator | + port_security_enabled = (known after apply)
2026-03-13 00:02:32.410671 | orchestrator | + qos_policy_id = (known after apply)
2026-03-13 00:02:32.410675 | orchestrator | + region = (known after apply)
2026-03-13 00:02:32.410683 | orchestrator | + security_group_ids = (known after apply)
2026-03-13 00:02:32.410686 | orchestrator | + tenant_id = (known after apply)
2026-03-13 00:02:32.410690 | orchestrator |
2026-03-13 00:02:32.410694 | orchestrator | + allowed_address_pairs {
2026-03-13 00:02:32.410698 | orchestrator | + ip_address = "192.168.16.254/32"
2026-03-13 00:02:32.410718 | orchestrator | }
2026-03-13 00:02:32.410722 | orchestrator | + allowed_address_pairs {
2026-03-13 00:02:32.410726 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-13 00:02:32.410730 | orchestrator | }
2026-03-13 00:02:32.410733 | orchestrator | + allowed_address_pairs {
2026-03-13 00:02:32.410740 | orchestrator | + ip_address = "192.168.16.9/32"
2026-03-13 00:02:32.410744 | orchestrator | }
2026-03-13 00:02:32.410748 | orchestrator |
2026-03-13 00:02:32.410752 | orchestrator | + binding (known after apply)
2026-03-13 00:02:32.410756 | orchestrator |
2026-03-13 00:02:32.410759 | orchestrator | + fixed_ip {
2026-03-13 00:02:32.410763 | orchestrator | + ip_address = "192.168.16.10"
2026-03-13 00:02:32.410767 | orchestrator | + subnet_id = (known after apply)
2026-03-13 00:02:32.410771 | orchestrator | }
2026-03-13 00:02:32.410774 | orchestrator | }
2026-03-13 00:02:32.410778 | orchestrator |
2026-03-13 00:02:32.410798 | orchestrator | # openstack_networking_port_v2.node_port_management[1] will be created
2026-03-13 00:02:32.410802 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-13 00:02:32.410809 | orchestrator | + admin_state_up = (known after apply)
2026-03-13 00:02:32.410813 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-13 00:02:32.410817 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-13 00:02:32.410821 | orchestrator | + all_tags = (known after apply)
2026-03-13 00:02:32.410825 | orchestrator | + device_id = (known after apply)
2026-03-13 00:02:32.410829 | orchestrator | + device_owner = (known after apply)
2026-03-13 00:02:32.410833 | orchestrator | + dns_assignment = (known after apply)
2026-03-13 00:02:32.410836 | orchestrator | + dns_name = (known after apply)
2026-03-13 00:02:32.410840 | orchestrator | + id = (known after apply)
2026-03-13 00:02:32.410844 | orchestrator | + mac_address = (known after apply)
2026-03-13 00:02:32.410849 | orchestrator | + network_id = (known after apply)
2026-03-13 00:02:32.410853 | orchestrator | + port_security_enabled = (known after apply)
2026-03-13 00:02:32.410857 | orchestrator | + qos_policy_id = (known after apply)
2026-03-13 00:02:32.410861 | orchestrator | + region = (known after apply)
2026-03-13 00:02:32.410865 | orchestrator | + security_group_ids = (known after apply)
2026-03-13 00:02:32.410869 | orchestrator | + tenant_id = (known after apply)
2026-03-13 00:02:32.410872 | orchestrator |
2026-03-13 00:02:32.410876 | orchestrator | + allowed_address_pairs {
2026-03-13 00:02:32.410895 | orchestrator | + ip_address = "192.168.16.254/32"
2026-03-13 00:02:32.410899 | orchestrator | }
2026-03-13 00:02:32.410903 | orchestrator | + allowed_address_pairs {
2026-03-13 00:02:32.410907 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-13 00:02:32.410910 | orchestrator | }
2026-03-13 00:02:32.410914 | orchestrator | + allowed_address_pairs {
2026-03-13 00:02:32.410918 | orchestrator | + ip_address = "192.168.16.9/32"
2026-03-13 00:02:32.410922 | orchestrator | }
2026-03-13 00:02:32.410926 | orchestrator |
2026-03-13 00:02:32.410929 | orchestrator | + binding (known after apply)
2026-03-13 00:02:32.410933 | orchestrator |
2026-03-13 00:02:32.410937 | orchestrator | + fixed_ip {
2026-03-13 00:02:32.410941 | orchestrator | + ip_address = "192.168.16.11"
2026-03-13 00:02:32.410945 | orchestrator | + subnet_id = (known after apply)
2026-03-13 00:02:32.410949 | orchestrator | }
2026-03-13 00:02:32.410952 | orchestrator | }
2026-03-13 00:02:32.410956 | orchestrator |
2026-03-13 00:02:32.410960 | orchestrator | # openstack_networking_port_v2.node_port_management[2] will be created
2026-03-13 00:02:32.410964 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-13 00:02:32.410968 | orchestrator | + admin_state_up = (known after apply)
2026-03-13 00:02:32.410972 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-13 00:02:32.410975 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-13 00:02:32.410979 | orchestrator | + all_tags = (known after apply)
2026-03-13 00:02:32.410987 | orchestrator | + device_id = (known after apply)
2026-03-13 00:02:32.410991 | orchestrator | + device_owner = (known after apply)
2026-03-13 00:02:32.410995 | orchestrator | + dns_assignment = (known after apply)
2026-03-13 00:02:32.410999 | orchestrator | + dns_name = (known after apply)
2026-03-13 00:02:32.411002 | orchestrator | + id = (known after apply)
2026-03-13 00:02:32.411006 | orchestrator | + mac_address = (known after apply)
2026-03-13 00:02:32.411010 | orchestrator | + network_id = (known after apply)
2026-03-13 00:02:32.411014 | orchestrator | + port_security_enabled = (known after apply)
2026-03-13 00:02:32.411034 | orchestrator | + qos_policy_id = (known after apply)
2026-03-13 00:02:32.411038 | orchestrator | + region = (known after apply)
2026-03-13 00:02:32.411042 | orchestrator | + security_group_ids = (known after apply)
2026-03-13 00:02:32.411045 | orchestrator | + tenant_id = (known after apply)
2026-03-13 00:02:32.411049 | orchestrator |
2026-03-13 00:02:32.411053 | orchestrator | + allowed_address_pairs {
2026-03-13 00:02:32.411057 | orchestrator | + ip_address = "192.168.16.254/32"
2026-03-13 00:02:32.411061 | orchestrator | }
2026-03-13 00:02:32.411064 | orchestrator | + allowed_address_pairs {
2026-03-13 00:02:32.411068 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-13 00:02:32.411072 | orchestrator | }
2026-03-13 00:02:32.411076 | orchestrator | + allowed_address_pairs {
2026-03-13 00:02:32.411080 | orchestrator | + ip_address = "192.168.16.9/32"
2026-03-13 00:02:32.411083 | orchestrator | }
2026-03-13 00:02:32.411087 | orchestrator |
2026-03-13 00:02:32.411091 | orchestrator | + binding (known after apply)
2026-03-13 00:02:32.411095 | orchestrator |
2026-03-13 00:02:32.411099 | orchestrator | + fixed_ip {
2026-03-13 00:02:32.411102 | orchestrator | + ip_address = "192.168.16.12"
2026-03-13 00:02:32.411106 | orchestrator | + subnet_id = (known after apply)
2026-03-13 00:02:32.411110 | orchestrator | }
2026-03-13 00:02:32.411114 | orchestrator | }
2026-03-13 00:02:32.411117 | orchestrator |
2026-03-13 00:02:32.411121 | orchestrator | # openstack_networking_port_v2.node_port_management[3] will be created
2026-03-13 00:02:32.411125 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-13 00:02:32.411129 | orchestrator | + admin_state_up = (known after apply)
2026-03-13 00:02:32.411133 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-13 00:02:32.411136 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-13 00:02:32.411140 | orchestrator | + all_tags = (known after apply)
2026-03-13 00:02:32.411144 | orchestrator | + device_id = (known after apply)
2026-03-13 00:02:32.411148 | orchestrator | + device_owner = (known after apply)
2026-03-13 00:02:32.411152 | orchestrator | + dns_assignment = (known after apply)
2026-03-13 00:02:32.411155 | orchestrator | + dns_name = (known after apply)
2026-03-13 00:02:32.411159 | orchestrator | + id = (known after apply)
2026-03-13 00:02:32.411176 | orchestrator | + mac_address = (known after apply)
2026-03-13 00:02:32.411180 | orchestrator | + network_id = (known after apply)
2026-03-13 00:02:32.411183 | orchestrator | + port_security_enabled = (known after apply)
2026-03-13 00:02:32.411187 | orchestrator | + qos_policy_id = (known after apply)
2026-03-13 00:02:32.411191 | orchestrator | + region = (known after apply)
2026-03-13 00:02:32.411195 | orchestrator | + security_group_ids = (known after apply)
2026-03-13 00:02:32.411198 | orchestrator | + tenant_id = (known after apply)
2026-03-13 00:02:32.411202 | orchestrator |
2026-03-13 00:02:32.411206 | orchestrator | + allowed_address_pairs {
2026-03-13 00:02:32.411210 | orchestrator | + ip_address = "192.168.16.254/32"
2026-03-13 00:02:32.411214 | orchestrator | }
2026-03-13 00:02:32.411220 | orchestrator | + allowed_address_pairs {
2026-03-13 00:02:32.411224 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-13 00:02:32.411228 | orchestrator | }
2026-03-13 00:02:32.411231 | orchestrator | + allowed_address_pairs {
2026-03-13 00:02:32.411255 | orchestrator | + ip_address = "192.168.16.9/32"
2026-03-13 00:02:32.411259 | orchestrator | }
2026-03-13 00:02:32.411263 | orchestrator |
2026-03-13 00:02:32.411270 | orchestrator | + binding (known after apply)
2026-03-13 00:02:32.411274 | orchestrator |
2026-03-13 00:02:32.411278 | orchestrator | + fixed_ip {
2026-03-13 00:02:32.411282 | orchestrator | + ip_address = "192.168.16.13"
2026-03-13 00:02:32.411286 | orchestrator | + subnet_id = (known after apply)
2026-03-13 00:02:32.411289 | orchestrator | }
2026-03-13 00:02:32.411293 | orchestrator | }
2026-03-13 00:02:32.411297 | orchestrator |
2026-03-13 00:02:32.411301 | orchestrator | # openstack_networking_port_v2.node_port_management[4] will be created
2026-03-13 00:02:32.411305 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-13 00:02:32.411308 | orchestrator | + admin_state_up = (known after apply)
2026-03-13 00:02:32.411312 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-13 00:02:32.411316 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-13 00:02:32.411320 | orchestrator | + all_tags = (known after apply)
2026-03-13 00:02:32.411324 | orchestrator | + device_id = (known after apply)
2026-03-13 00:02:32.411327 | orchestrator | + device_owner = (known after apply)
2026-03-13 00:02:32.411331 | orchestrator | + dns_assignment = (known after apply)
2026-03-13 00:02:32.411335 | orchestrator | + dns_name = (known after apply)
2026-03-13 00:02:32.411341 | orchestrator | + id = (known after apply)
2026-03-13 00:02:32.411345 | orchestrator | + mac_address = (known after apply)
2026-03-13 00:02:32.411349 | orchestrator | + network_id = (known after apply)
2026-03-13 00:02:32.411353 | orchestrator | + port_security_enabled = (known after apply)
2026-03-13 00:02:32.411357 | orchestrator | + qos_policy_id = (known after apply)
2026-03-13 00:02:32.411360 | orchestrator | + region = (known after apply)
2026-03-13 00:02:32.411364 | orchestrator | + security_group_ids = (known after apply)
2026-03-13 00:02:32.411368 | orchestrator | + tenant_id = (known after apply)
2026-03-13 00:02:32.411372 | orchestrator |
2026-03-13 00:02:32.411376 | orchestrator | + allowed_address_pairs {
2026-03-13 00:02:32.411383 | orchestrator | + ip_address = "192.168.16.254/32"
2026-03-13 00:02:32.411386 | orchestrator | }
2026-03-13 00:02:32.411390 | orchestrator | + allowed_address_pairs {
2026-03-13 00:02:32.411394 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-13 00:02:32.411398 | orchestrator | }
2026-03-13 00:02:32.411402 | orchestrator | + allowed_address_pairs {
2026-03-13 00:02:32.411405 | orchestrator | + ip_address = "192.168.16.9/32"
2026-03-13 00:02:32.411409 | orchestrator | }
2026-03-13 00:02:32.411413 | orchestrator |
2026-03-13 00:02:32.411417 | orchestrator | + binding (known after apply)
2026-03-13 00:02:32.411421 | orchestrator |
2026-03-13 00:02:32.411424 | orchestrator | + fixed_ip {
2026-03-13 00:02:32.411428 | orchestrator | + ip_address = "192.168.16.14"
2026-03-13 00:02:32.411432 | orchestrator | + subnet_id = (known after apply)
2026-03-13 00:02:32.411436 | orchestrator | }
2026-03-13 00:02:32.411439 | orchestrator | }
2026-03-13 00:02:32.411443 | orchestrator |
2026-03-13 00:02:32.411447 | orchestrator | # openstack_networking_port_v2.node_port_management[5] will be created
2026-03-13 00:02:32.411451 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-13 00:02:32.411455 | orchestrator | + admin_state_up = (known after apply)
2026-03-13 00:02:32.411485 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-13 00:02:32.411490 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-13 00:02:32.411493 | orchestrator | + all_tags = (known after apply)
2026-03-13 00:02:32.411497 | orchestrator | + device_id = (known after apply)
2026-03-13 00:02:32.411501 | orchestrator | + device_owner = (known after apply)
2026-03-13 00:02:32.411505 | orchestrator | + dns_assignment = (known after apply)
2026-03-13 00:02:32.411509 | orchestrator | + dns_name = (known after apply)
2026-03-13 00:02:32.411512 | orchestrator | + id = (known after apply)
2026-03-13 00:02:32.411516 | orchestrator | + mac_address = (known after apply)
2026-03-13 00:02:32.411520 | orchestrator | + network_id = (known after apply)
2026-03-13 00:02:32.411524 | orchestrator | + port_security_enabled = (known after apply)
2026-03-13 00:02:32.411528 | orchestrator | + qos_policy_id = (known after apply)
2026-03-13 00:02:32.411535 | orchestrator | + region = (known after apply)
2026-03-13 00:02:32.411538 | orchestrator | + security_group_ids = (known after apply)
2026-03-13 00:02:32.411542 | orchestrator | + tenant_id = (known after apply)
2026-03-13 00:02:32.411546 | orchestrator |
2026-03-13 00:02:32.411550 | orchestrator | + allowed_address_pairs {
2026-03-13 00:02:32.411553 | orchestrator | + ip_address = "192.168.16.254/32"
2026-03-13 00:02:32.411557 | orchestrator | }
2026-03-13 00:02:32.411561 | orchestrator | + allowed_address_pairs {
2026-03-13 00:02:32.411565 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-13 00:02:32.411569 | orchestrator | }
2026-03-13 00:02:32.411572 | orchestrator | + allowed_address_pairs {
2026-03-13 00:02:32.411576 | orchestrator | + ip_address = "192.168.16.9/32"
2026-03-13 00:02:32.411580 | orchestrator | }
2026-03-13 00:02:32.411584 | orchestrator |
2026-03-13 00:02:32.411588 | orchestrator | + binding (known after apply)
2026-03-13 00:02:32.411591 | orchestrator |
2026-03-13 00:02:32.411595 | orchestrator | + fixed_ip {
2026-03-13 00:02:32.411599 | orchestrator | + ip_address = "192.168.16.15"
2026-03-13 00:02:32.411603 | orchestrator | + subnet_id = (known after apply)
2026-03-13 00:02:32.411607 | orchestrator | }
2026-03-13 00:02:32.411610 | orchestrator | }
2026-03-13 00:02:32.411614 | orchestrator |
2026-03-13 00:02:32.411634 | orchestrator | # openstack_networking_router_interface_v2.router_interface will be created
2026-03-13 00:02:32.411638 | orchestrator | + resource "openstack_networking_router_interface_v2" "router_interface" {
2026-03-13 00:02:32.411642 | orchestrator | + force_destroy = false
2026-03-13 00:02:32.411646 | orchestrator | + id = (known after apply)
2026-03-13 00:02:32.411650 | orchestrator | + port_id = (known after apply)
2026-03-13 00:02:32.411654 | orchestrator | + region = (known after apply)
2026-03-13 00:02:32.411657 | orchestrator | + router_id = (known after apply)
2026-03-13 00:02:32.411661 | orchestrator | + subnet_id = (known after apply)
2026-03-13 00:02:32.411665 | orchestrator | }
2026-03-13 00:02:32.411669 | orchestrator |
2026-03-13 00:02:32.411673 | orchestrator | # openstack_networking_router_v2.router will be created
2026-03-13 00:02:32.411676 | orchestrator | + resource "openstack_networking_router_v2" "router" {
2026-03-13 00:02:32.411680 | orchestrator | + admin_state_up = (known after apply)
2026-03-13 00:02:32.411684 | orchestrator | + all_tags = (known after apply)
2026-03-13 00:02:32.411688 | orchestrator | + availability_zone_hints = [
2026-03-13 00:02:32.411692 | orchestrator | + "nova",
2026-03-13 00:02:32.411695 | orchestrator | ]
2026-03-13 00:02:32.411699 | orchestrator | + distributed = (known after apply)
2026-03-13 00:02:32.411703 | orchestrator | + enable_snat = (known after apply)
2026-03-13 00:02:32.411707 | orchestrator | + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
2026-03-13 00:02:32.411711 | orchestrator | + external_qos_policy_id = (known after apply)
2026-03-13 00:02:32.411720 | orchestrator | + id = (known after apply)
2026-03-13 00:02:32.411724 | orchestrator | + name = "testbed"
2026-03-13 00:02:32.411728 | orchestrator | + region = (known after apply)
2026-03-13 00:02:32.411732 | orchestrator | + tenant_id = (known after apply)
2026-03-13 00:02:32.411736 | orchestrator |
2026-03-13 00:02:32.411739 | orchestrator | + external_fixed_ip (known after apply)
2026-03-13 00:02:32.411743 | orchestrator | }
2026-03-13 00:02:32.411747 | orchestrator |
2026-03-13 00:02:32.411751 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
2026-03-13 00:02:32.411755 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
2026-03-13 00:02:32.411758 | orchestrator | + description = "ssh"
2026-03-13 00:02:32.411762 | orchestrator | + direction = "ingress"
2026-03-13 00:02:32.411766 | orchestrator | + ethertype = "IPv4"
2026-03-13 00:02:32.411770 | orchestrator | + id = (known after apply)
2026-03-13 00:02:32.411774 | orchestrator | + port_range_max = 22
2026-03-13 00:02:32.411777 | orchestrator | + port_range_min = 22
2026-03-13 00:02:32.411781 | orchestrator | + protocol = "tcp"
2026-03-13 00:02:32.411785 | orchestrator | + region = (known after apply)
2026-03-13 00:02:32.411793 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-13 00:02:32.411797 | orchestrator | + remote_group_id = (known after apply)
2026-03-13 00:02:32.411801 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-13 00:02:32.411804 | orchestrator | + security_group_id = (known after apply)
2026-03-13 00:02:32.411808 | orchestrator | + tenant_id = (known after apply)
2026-03-13 00:02:32.411812 | orchestrator | }
2026-03-13 00:02:32.411816 | orchestrator |
2026-03-13 00:02:32.411820 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
2026-03-13 00:02:32.411824 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
2026-03-13 00:02:32.411827 | orchestrator | + description = "wireguard"
2026-03-13 00:02:32.411831 | orchestrator | + direction = "ingress"
2026-03-13 00:02:32.411835 | orchestrator | + ethertype = "IPv4"
2026-03-13 00:02:32.411839 | orchestrator | + id = (known after apply)
2026-03-13 00:02:32.411843 | orchestrator | + port_range_max = 51820
2026-03-13 00:02:32.411846 | orchestrator | + port_range_min = 51820
2026-03-13 00:02:32.411850 | orchestrator | + protocol = "udp"
2026-03-13 00:02:32.411854 | orchestrator | + region = (known after apply)
2026-03-13 00:02:32.411858 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-13 00:02:32.411862 | orchestrator | + remote_group_id = (known after apply)
2026-03-13 00:02:32.411865 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-13 00:02:32.411869 | orchestrator | + security_group_id = (known after apply)
2026-03-13 00:02:32.411873 | orchestrator | + tenant_id = (known after apply)
2026-03-13 00:02:32.411877 | orchestrator | }
2026-03-13 00:02:32.411881 | orchestrator |
2026-03-13 00:02:32.411884 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
2026-03-13 00:02:32.411888 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
2026-03-13 00:02:32.411895 | orchestrator | + direction = "ingress"
2026-03-13 00:02:32.411898 | orchestrator | + ethertype = "IPv4"
2026-03-13 00:02:32.411902 | orchestrator | + id = (known after apply)
2026-03-13 00:02:32.411906 | orchestrator | + protocol = "tcp"
2026-03-13 00:02:32.411910 | orchestrator | + region = (known after apply)
2026-03-13 00:02:32.411913 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-13 00:02:32.411917 | orchestrator | + remote_group_id = (known after apply)
2026-03-13 00:02:32.411921 | orchestrator | + remote_ip_prefix = "192.168.16.0/20"
2026-03-13 00:02:32.411925 | orchestrator | + security_group_id = (known after apply)
2026-03-13 00:02:32.411928 | orchestrator | + tenant_id = (known after apply)
2026-03-13 00:02:32.411932 | orchestrator | }
2026-03-13 00:02:32.411936 | orchestrator |
2026-03-13 00:02:32.411940 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
2026-03-13 00:02:32.411944 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
2026-03-13 00:02:32.411948 | orchestrator | + direction = "ingress"
2026-03-13 00:02:32.411951 | orchestrator | + ethertype = "IPv4"
2026-03-13 00:02:32.411955 | orchestrator | + id = (known after apply)
2026-03-13 00:02:32.411959 | orchestrator | + protocol = "udp"
2026-03-13 00:02:32.411963 | orchestrator | + region = (known after apply)
2026-03-13 00:02:32.411966 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-13 00:02:32.411970 | orchestrator | + remote_group_id = (known after apply)
2026-03-13 00:02:32.411974 | orchestrator | + remote_ip_prefix = "192.168.16.0/20"
2026-03-13 00:02:32.411978 | orchestrator | + security_group_id = (known after apply)
2026-03-13 00:02:32.411981 | orchestrator | + tenant_id = (known after apply)
2026-03-13 00:02:32.411985 | orchestrator | }
2026-03-13 00:02:32.411989 | orchestrator |
2026-03-13 00:02:32.411993 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
2026-03-13 00:02:32.411999 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
2026-03-13 00:02:32.412003 | orchestrator | + direction = "ingress"
2026-03-13 00:02:32.412007 | orchestrator | + ethertype = "IPv4"
2026-03-13 00:02:32.412011 | orchestrator | + id = (known after apply)
2026-03-13 00:02:32.412014 | orchestrator | + protocol = "icmp"
2026-03-13 00:02:32.412018 | orchestrator | + region = (known after apply)
2026-03-13 00:02:32.412022 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-13 00:02:32.412026 | orchestrator | + remote_group_id = (known after apply)
2026-03-13 00:02:32.412029 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-13 00:02:32.412033 | orchestrator | + security_group_id = (known after apply)
2026-03-13 00:02:32.412037 | orchestrator | + tenant_id = (known after apply)
2026-03-13 00:02:32.412041 | orchestrator | }
2026-03-13 00:02:32.412045 | orchestrator |
2026-03-13 00:02:32.412048 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2026-03-13 00:02:32.412052 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2026-03-13 00:02:32.412056 | orchestrator | + direction = "ingress"
2026-03-13 00:02:32.412062 | orchestrator | + ethertype = "IPv4"
2026-03-13 00:02:32.412066 | orchestrator | + id = (known after apply)
2026-03-13 00:02:32.412070 | orchestrator | + protocol = "tcp"
2026-03-13 00:02:32.412074 | orchestrator | + region = (known after apply)
2026-03-13 00:02:32.412078 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-13 00:02:32.412081 | orchestrator | + remote_group_id = (known after apply)
2026-03-13 00:02:32.412085 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-13 00:02:32.412089 | orchestrator | + security_group_id = (known after apply)
2026-03-13 00:02:32.412093 | orchestrator | + tenant_id = (known after apply)
2026-03-13 00:02:32.412096 | orchestrator | }
2026-03-13 00:02:32.412100 | orchestrator |
2026-03-13 00:02:32.412104 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2026-03-13 00:02:32.412108 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2026-03-13 00:02:32.412112 | orchestrator | + direction = "ingress"
2026-03-13 00:02:32.412115 | orchestrator | + ethertype = "IPv4"
2026-03-13 00:02:32.412119 | orchestrator | + id = (known after apply)
2026-03-13 00:02:32.412123 | orchestrator | + protocol = "udp"
2026-03-13 00:02:32.412127 | orchestrator | + region = (known after apply)
2026-03-13 00:02:32.412131 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-13 00:02:32.412134 | orchestrator | + remote_group_id = (known after apply)
2026-03-13 00:02:32.412138 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-13 00:02:32.412142 | orchestrator | + security_group_id = (known after apply)
2026-03-13 00:02:32.412146 | orchestrator | + tenant_id = (known after apply)
2026-03-13 00:02:32.412149 | orchestrator | }
2026-03-13 00:02:32.412153 | orchestrator |
2026-03-13 00:02:32.412157 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2026-03-13 00:02:32.412161 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2026-03-13 00:02:32.412165 | orchestrator | + direction = "ingress"
2026-03-13 00:02:32.412168 | orchestrator | + ethertype = "IPv4"
2026-03-13 00:02:32.412172 | orchestrator | + id = (known after apply)
2026-03-13 00:02:32.412176 | orchestrator | + protocol = "icmp"
2026-03-13 00:02:32.412180 | orchestrator | + region = (known after apply)
2026-03-13 00:02:32.412184 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-13 00:02:32.412187 | orchestrator | + remote_group_id = (known after apply)
2026-03-13 00:02:32.412191 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-13 00:02:32.412195 | orchestrator | + security_group_id = (known after apply)
2026-03-13 00:02:32.412199 | orchestrator | + tenant_id = (known after apply)
2026-03-13 00:02:32.412206 | orchestrator | }
2026-03-13 00:02:32.412210 | orchestrator |
2026-03-13 00:02:32.412214 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2026-03-13 00:02:32.412217 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2026-03-13 00:02:32.412221 | orchestrator | + description = "vrrp"
2026-03-13 00:02:32.412225 | orchestrator | + direction = "ingress"
2026-03-13 00:02:32.412229 | orchestrator | + ethertype = "IPv4"
2026-03-13 00:02:32.412233 | orchestrator | + id = (known after apply)
2026-03-13 00:02:32.412236 | orchestrator | + protocol = "112"
2026-03-13 00:02:32.412240 | orchestrator | + region = (known after apply)
2026-03-13 00:02:32.412244 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-13 00:02:32.412248 | orchestrator | + remote_group_id = (known after apply)
2026-03-13 00:02:32.412251 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-13 00:02:32.412255 | orchestrator | + security_group_id = (known after apply)
2026-03-13 00:02:32.412259 | orchestrator | + tenant_id = (known after apply)
2026-03-13 00:02:32.412263 | orchestrator | }
2026-03-13 00:02:32.412267 | orchestrator |
2026-03-13 00:02:32.412270 | orchestrator | # openstack_networking_secgroup_v2.security_group_management will be created
2026-03-13 00:02:32.412274 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_management" {
2026-03-13 00:02:32.412278 | orchestrator | + all_tags = (known after apply)
2026-03-13 00:02:32.412282 | orchestrator | + description = "management security group"
2026-03-13 00:02:32.412286 | orchestrator | + id = (known after apply)
2026-03-13 00:02:32.412289 | orchestrator | + name = "testbed-management"
2026-03-13 00:02:32.412293 | orchestrator | + region = (known after apply)
2026-03-13 00:02:32.412297 | orchestrator | + stateful = (known after apply)
2026-03-13 00:02:32.412301 | orchestrator | + tenant_id = (known after apply)
2026-03-13 00:02:32.412304 | orchestrator | }
2026-03-13 00:02:32.412308 | orchestrator |
2026-03-13 00:02:32.412312 | orchestrator | # openstack_networking_secgroup_v2.security_group_node will be created
2026-03-13 00:02:32.412334 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_node" {
2026-03-13 00:02:32.412338 | orchestrator | + all_tags = (known after apply)
2026-03-13 00:02:32.412342 | orchestrator | + description = "node security group"
2026-03-13 00:02:32.412346 | orchestrator | + id = (known after apply)
2026-03-13 00:02:32.412349 | orchestrator | + name = "testbed-node"
2026-03-13 00:02:32.412353 | orchestrator | + region = (known after apply)
2026-03-13 00:02:32.412357 | orchestrator | + stateful = (known after apply)
2026-03-13 00:02:32.412361 | orchestrator | + tenant_id = (known after apply)
2026-03-13 00:02:32.412364 | orchestrator | }
2026-03-13 00:02:32.412368 | orchestrator |
2026-03-13 00:02:32.412372 | orchestrator | # openstack_networking_subnet_v2.subnet_management will be created
2026-03-13 00:02:32.412376 | orchestrator | + resource "openstack_networking_subnet_v2" "subnet_management" {
2026-03-13 00:02:32.412379 | orchestrator | + all_tags = (known after apply)
2026-03-13 00:02:32.412383 | orchestrator | + cidr = "192.168.16.0/20"
2026-03-13 00:02:32.412387 | orchestrator | + dns_nameservers = [
2026-03-13 00:02:32.412390 | orchestrator | + "8.8.8.8",
2026-03-13 00:02:32.412394 | orchestrator | + "9.9.9.9",
2026-03-13 00:02:32.412398 | orchestrator | ]
2026-03-13 00:02:32.412402 | orchestrator | + enable_dhcp = true
2026-03-13 00:02:32.412406 | orchestrator | + gateway_ip = (known after apply)
2026-03-13 00:02:32.412412 | orchestrator | + id = (known after apply)
2026-03-13 00:02:32.412416 | orchestrator | + ip_version = 4
2026-03-13 00:02:32.412420 | orchestrator | + ipv6_address_mode = (known after apply)
2026-03-13 00:02:32.412424 | orchestrator | + ipv6_ra_mode = (known after apply)
2026-03-13 00:02:32.412427 | orchestrator | + name = "subnet-testbed-management"
2026-03-13 00:02:32.412431 | orchestrator | + network_id = (known after apply)
2026-03-13 00:02:32.412437 | orchestrator | + no_gateway = false
2026-03-13 00:02:32.412441 | orchestrator | + region = (known after apply)
2026-03-13 00:02:32.412445 | orchestrator | + service_types = (known after apply)
2026-03-13 00:02:32.412452 | orchestrator | + tenant_id = (known after apply)
2026-03-13 00:02:32.412455 | orchestrator |
2026-03-13 00:02:32.412470 | orchestrator | + allocation_pool {
2026-03-13 00:02:32.412474 | orchestrator | + end = "192.168.31.250"
2026-03-13 00:02:32.412478 | orchestrator | + start = "192.168.31.200"
2026-03-13 00:02:32.412482 | orchestrator | }
2026-03-13 00:02:32.412486 | orchestrator | }
2026-03-13 00:02:32.412489 | orchestrator |
2026-03-13 00:02:32.412493 | orchestrator | # terraform_data.image will be created
2026-03-13 00:02:32.412497 | orchestrator | + resource "terraform_data" "image" {
2026-03-13 00:02:32.412500 | orchestrator | + id = (known after apply)
2026-03-13 00:02:32.412504 | orchestrator | + input = "Ubuntu 24.04"
2026-03-13 00:02:32.412508 | orchestrator | + output = (known after apply)
2026-03-13 00:02:32.412512 | orchestrator | }
2026-03-13 00:02:32.412515 | orchestrator |
2026-03-13 00:02:32.412519 | orchestrator | # terraform_data.image_node will be created
2026-03-13 00:02:32.412523 | orchestrator | + resource "terraform_data" "image_node" {
2026-03-13 00:02:32.412526 | orchestrator | + id = (known after apply)
2026-03-13 00:02:32.412530 | orchestrator | + input = "Ubuntu 24.04"
2026-03-13 00:02:32.412534 | orchestrator | + output = (known after apply)
2026-03-13 00:02:32.412538 | orchestrator | }
2026-03-13 00:02:32.412541 | orchestrator |
2026-03-13 00:02:32.412545 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
2026-03-13 00:02:32.412549 | orchestrator |
2026-03-13 00:02:32.412552 | orchestrator | Changes to Outputs:
2026-03-13 00:02:32.412556 | orchestrator | + manager_address = (sensitive value)
2026-03-13 00:02:32.412560 | orchestrator | + private_key = (sensitive value)
2026-03-13 00:02:32.684820 | orchestrator | terraform_data.image: Creating...
2026-03-13 00:02:32.684879 | orchestrator | terraform_data.image: Creation complete after 0s [id=0957283f-bd32-6525-f26a-bb48e270e74d]
2026-03-13 00:02:32.690140 | orchestrator | terraform_data.image_node: Creating...
2026-03-13 00:02:32.690206 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=10bd5168-96e1-e95b-dca5-837cc78c9faf]
2026-03-13 00:02:32.704783 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-03-13 00:02:32.705414 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-03-13 00:02:32.705513 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-03-13 00:02:32.716357 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-03-13 00:02:32.717328 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-03-13 00:02:32.718983 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-03-13 00:02:32.719851 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-03-13 00:02:32.725665 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-03-13 00:02:32.726446 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-03-13 00:02:32.739121 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-03-13 00:02:33.211678 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-13 00:02:33.543993 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-03-13 00:02:33.544046 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-13 00:02:33.544053 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-03-13 00:02:33.544059 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-03-13 00:02:33.544065 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-03-13 00:02:33.851023 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=3622e7e1-dde8-4459-87aa-b27ea89f43ad]
2026-03-13 00:02:33.859929 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-03-13 00:02:36.441621 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=2fefda09-8576-4844-bc1b-e9a7eb3ad8aa]
2026-03-13 00:02:36.447508 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-03-13 00:02:36.455641 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=9a254b57-f2ae-4287-95a0-937fffba734e]
2026-03-13 00:02:36.473451 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-03-13 00:02:36.478098 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=5123065a-17ef-4227-8b29-db8d7701c704]
2026-03-13 00:02:36.478151 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=0e74aa03-0dd7-4a4d-94fb-6534b2ad29b5]
2026-03-13 00:02:36.484318 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-03-13 00:02:36.485711 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-03-13 00:02:36.489372 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=9d580995040879c63dccffbe40cd73dd3a68da9c]
2026-03-13 00:02:36.510950 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-03-13 00:02:36.517682 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=e5b6d572-8591-43aa-97f9-3b718c2d248a]
2026-03-13 00:02:36.521310 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-03-13 00:02:36.544303 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=dc527160-e7af-4d74-be06-07ea7bd10a9b]
2026-03-13 00:02:36.550281 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-03-13 00:02:36.561441 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=b47ce045-806c-4f33-b887-31de2316680c]
2026-03-13 00:02:36.572178 | orchestrator | local_file.id_rsa_pub: Creating...
2026-03-13 00:02:36.575511 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=c83b08c7af8e9f1f888cc6b0d2777b0c2a97c005]
2026-03-13 00:02:36.581129 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-03-13 00:02:36.718601 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=173613da-cd5d-4175-9e2e-faf4092bf0a3]
2026-03-13 00:02:36.877242 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=e49f76b8-3d49-472e-b9d5-6b475ff66b1a]
2026-03-13 00:02:37.231916 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=9443f19b-dbaf-413d-91d9-96cb87100ab7]
2026-03-13 00:02:38.364949 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=4f18755b-a97d-46c1-9e80-40a364db88ae]
2026-03-13 00:02:38.371986 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-03-13 00:02:40.009298 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=9bfdf0a9-a88b-432c-bbdd-eaea61a071f8]
2026-03-13 00:02:40.029794 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=5a0d7d0f-4636-493e-803a-05680bb9c3f6]
2026-03-13 00:02:40.065531 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=424b80b3-bd2d-4fbf-95b2-3708ce35a18a]
2026-03-13 00:02:40.077146 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=50cdff76-9cd5-47b7-8bb7-718e614446bf]
2026-03-13 00:02:40.092263 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=97f75f20-579f-4518-b7ce-4d90969f977d]
2026-03-13 00:02:40.162695 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=b2c3f3a5-d054-4214-843e-d9b33fe0d233]
2026-03-13 00:02:41.663826 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=d49573c3-c90c-4614-b3bc-00ed9ada1ba0]
2026-03-13 00:02:41.669672 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-03-13 00:02:41.669835 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-03-13 00:02:41.670434 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-03-13 00:02:42.148476 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=02e31c43-528d-4ef2-8312-b21bbbd1c349]
2026-03-13 00:02:42.157269 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-03-13 00:02:42.157336 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-03-13 00:02:42.157344 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-03-13 00:02:42.158976 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-03-13 00:02:42.160022 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-03-13 00:02:42.161066 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-03-13 00:02:42.206712 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=26b62bc4-df3b-44ec-9349-ad1de93026e9]
2026-03-13 00:02:42.211490 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-03-13 00:02:42.211547 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-03-13 00:02:42.211563 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-03-13 00:02:42.565759 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=06e3644f-5fef-4c49-adc8-161e2a0d6b7b]
2026-03-13 00:02:42.576783 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-03-13 00:02:42.784103 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=92208903-990c-4af4-b467-aa448fcfefb9]
2026-03-13 00:02:42.796756 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-03-13 00:02:43.117597 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=e091ddd2-37f5-4a4c-aa32-5a591455373d]
2026-03-13 00:02:43.131856 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-03-13 00:02:43.243719 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=eb1ef0e9-2bea-41f6-928c-1cfdcc6820c4]
2026-03-13 00:02:43.254784 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-03-13 00:02:43.521720 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 2s [id=1332f06e-44c3-4c84-94a1-8fb2acfa0280]
2026-03-13 00:02:44.954669 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-03-13 00:02:44.954741 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=10edf501-0373-498e-83cc-9fc83f93ca2d]
2026-03-13 00:02:44.954757 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-03-13 00:02:44.954790 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 2s [id=5e05a1ea-72fa-4d49-86d9-bfebc9319a09]
2026-03-13 00:02:44.954803 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-03-13 00:02:44.954816 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 2s [id=b12f61a2-3d76-46b4-a262-e798a0395b82]
2026-03-13 00:02:44.954828 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=1f6c6a32-8f8f-4060-8cea-a65a19859844]
2026-03-13 00:02:44.954839 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=d4ff417b-f2e4-403f-b174-02b447740d6d]
2026-03-13 00:02:44.954851 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=aad53b61-8ce9-407b-b8ba-7ba3987fe403]
2026-03-13 00:02:44.954883 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 3s [id=79dd945b-d66d-4741-b58c-2d268fb3c4e3]
2026-03-13 00:02:45.027204 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 3s [id=ac052cf1-405f-49a1-93a4-a9e3f65b0dfe]
2026-03-13 00:02:45.282863 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 2s [id=96ab7177-1ad5-4cd1-965d-133ae248ee0e]
2026-03-13 00:02:45.469681 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=9b8676e4-64c5-4f77-b1a1-83e8b2677b31]
2026-03-13 00:02:45.775110 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 2s [id=4bbebad7-d66d-4323-8d58-a58fe154918a]
2026-03-13 00:02:46.364078 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=8bf03256-b75c-4d1f-be9f-97a9d787545e]
2026-03-13 00:02:46.387873 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-03-13 00:02:46.396050 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-03-13 00:02:46.413695 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-03-13 00:02:46.414272 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-03-13 00:02:46.414916 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-03-13 00:02:46.427156 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-03-13 00:02:46.429714 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-03-13 00:02:48.473106 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=ed9b58ed-fe7c-426c-9906-f2c7234d42ab]
2026-03-13 00:02:48.482082 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-03-13 00:02:48.488971 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-03-13 00:02:48.490305 | orchestrator | local_file.inventory: Creating...
2026-03-13 00:02:48.496162 | orchestrator | local_file.inventory: Creation complete after 0s [id=bcd9ff6eac8513a4aabd0e74aed9221079f5bedb]
2026-03-13 00:02:48.497331 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=593fd305d62d7cef14a29c23250522939f164850]
2026-03-13 00:02:49.302359 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=ed9b58ed-fe7c-426c-9906-f2c7234d42ab]
2026-03-13 00:02:56.408914 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-03-13 00:02:56.418103 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-03-13 00:02:56.418176 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-03-13 00:02:56.418199 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-03-13 00:02:56.429462 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-03-13 00:02:56.432800 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-03-13 00:03:06.418208 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-03-13 00:03:06.418290 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-03-13 00:03:06.418296 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-03-13 00:03:06.418301 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-03-13 00:03:06.430792 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-03-13 00:03:06.434381 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-03-13 00:03:16.426377 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-03-13 00:03:16.426503 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-03-13 00:03:16.426515 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-03-13 00:03:16.426533 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-03-13 00:03:16.431742 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-03-13 00:03:16.435080 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-03-13 00:03:17.212339 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=376f6f62-8d64-4152-ac2a-335282506eac]
2026-03-13 00:03:26.426751 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2026-03-13 00:03:26.426932 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2026-03-13 00:03:26.426971 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-03-13 00:03:26.427040 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-03-13 00:03:26.435459 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2026-03-13 00:03:27.236959 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 41s [id=6db090df-36c8-4aa8-8ee8-2a0fc6cd5789]
2026-03-13 00:03:27.328756 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 41s [id=d60fdd7a-f067-41b0-9033-57f725f5c7b4]
2026-03-13 00:03:36.436055 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed]
2026-03-13 00:03:36.436281 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed]
2026-03-13 00:03:36.436302 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [50s elapsed]
2026-03-13 00:03:37.421339 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 51s [id=72c9d49d-2023-45f2-bcf9-9ffe31bb8594]
2026-03-13 00:03:37.421468 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 51s [id=c0e9bd7b-4288-43a7-8e6d-70246dc84f87]
2026-03-13 00:03:46.438106 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [1m0s elapsed]
2026-03-13 00:03:47.545062 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 1m2s [id=abe78852-de10-4789-a54f-3f8046529dfb]
2026-03-13 00:03:47.553263 | orchestrator | null_resource.node_semaphore: Creating...
2026-03-13 00:03:47.557502 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=5527190768749148301]
2026-03-13 00:03:47.570839 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-03-13 00:03:47.571651 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-03-13 00:03:47.580884 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-03-13 00:03:47.581613 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-03-13 00:03:47.594550 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-03-13 00:03:47.596716 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-03-13 00:03:47.607182 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-03-13 00:03:47.607856 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-03-13 00:03:47.609364 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-03-13 00:03:47.614274 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-03-13 00:03:51.218696 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=d60fdd7a-f067-41b0-9033-57f725f5c7b4/2fefda09-8576-4844-bc1b-e9a7eb3ad8aa]
2026-03-13 00:03:51.237972 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=376f6f62-8d64-4152-ac2a-335282506eac/b47ce045-806c-4f33-b887-31de2316680c]
2026-03-13 00:03:51.247754 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=c0e9bd7b-4288-43a7-8e6d-70246dc84f87/e49f76b8-3d49-472e-b9d5-6b475ff66b1a]
2026-03-13 00:03:51.293558 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=d60fdd7a-f067-41b0-9033-57f725f5c7b4/dc527160-e7af-4d74-be06-07ea7bd10a9b]
2026-03-13 00:03:51.325857 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=c0e9bd7b-4288-43a7-8e6d-70246dc84f87/5123065a-17ef-4227-8b29-db8d7701c704]
2026-03-13 00:03:51.347717 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=376f6f62-8d64-4152-ac2a-335282506eac/e5b6d572-8591-43aa-97f9-3b718c2d248a]
2026-03-13 00:03:57.438360 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 9s [id=d60fdd7a-f067-41b0-9033-57f725f5c7b4/173613da-cd5d-4175-9e2e-faf4092bf0a3]
2026-03-13 00:03:57.448948 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 9s [id=376f6f62-8d64-4152-ac2a-335282506eac/0e74aa03-0dd7-4a4d-94fb-6534b2ad29b5]
2026-03-13 00:03:57.465641 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 9s [id=c0e9bd7b-4288-43a7-8e6d-70246dc84f87/9a254b57-f2ae-4287-95a0-937fffba734e]
2026-03-13 00:03:57.612657 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-03-13 00:04:07.616104 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-03-13 00:04:08.347880 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=642122ea-9560-4d48-b365-d3ac65d6e8c8]
2026-03-13 00:04:08.363326 | orchestrator |
2026-03-13 00:04:08.363495 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-03-13 00:04:08.363505 | orchestrator |
2026-03-13 00:04:08.363509 | orchestrator | Outputs:
2026-03-13 00:04:08.363514 | orchestrator |
2026-03-13 00:04:08.363518 | orchestrator | manager_address =
2026-03-13 00:04:08.363522 | orchestrator | private_key =
2026-03-13 00:04:08.531073 | orchestrator | ok: Runtime: 0:01:40.421096
2026-03-13 00:04:08.566981 |
2026-03-13 00:04:08.567120 | TASK [Create infrastructure (stable)]
2026-03-13 00:04:09.121874 | orchestrator | skipping: Conditional result was False
2026-03-13 00:04:09.142433 |
2026-03-13 00:04:09.142603 | TASK [Fetch manager address]
2026-03-13 00:04:09.661960 | orchestrator | ok
2026-03-13 00:04:09.668796 |
2026-03-13 00:04:09.668887 | TASK [Set manager_host address]
2026-03-13 00:04:09.731112 | orchestrator | ok
2026-03-13 00:04:09.739890 |
2026-03-13 00:04:09.739985 | LOOP [Update ansible collections]
2026-03-13 00:04:11.108937 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-13 00:04:11.109315 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-13 00:04:11.109379 | orchestrator | Starting galaxy collection install process
2026-03-13 00:04:11.109422 | orchestrator | Process install dependency map
2026-03-13 00:04:11.109460 | orchestrator | Starting collection install process
2026-03-13 00:04:11.109495 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons'
2026-03-13 00:04:11.109538 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons
2026-03-13 00:04:11.109593 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-03-13 00:04:11.109677 | orchestrator | ok: Item: commons Runtime: 0:00:00.936117
2026-03-13 00:04:12.293575 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-13 00:04:12.293682 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-13 00:04:12.293716 | orchestrator | Starting galaxy collection install process
2026-03-13 00:04:12.293742 | orchestrator | Process install dependency map
2026-03-13 00:04:12.293775 | orchestrator | Starting collection install process
2026-03-13 00:04:12.293798 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services'
2026-03-13 00:04:12.293821 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services
2026-03-13 00:04:12.293843 | orchestrator | osism.services:999.0.0 was installed successfully
2026-03-13 00:04:12.293875 | orchestrator | ok: Item: services Runtime: 0:00:00.897267
2026-03-13 00:04:12.313467 |
2026-03-13 00:04:12.313627 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-03-13 00:04:22.975202 | orchestrator | ok
2026-03-13 00:04:22.987114 |
2026-03-13 00:04:22.987288 | TASK [Wait a little longer for the manager so that everything is ready]
2026-03-13 00:05:23.034326 | orchestrator | ok
2026-03-13 00:05:23.046991 |
2026-03-13 00:05:23.047131 | TASK [Fetch manager ssh hostkey]
2026-03-13 00:05:24.630045 | orchestrator | Output suppressed because no_log was given
2026-03-13 00:05:24.645566 |
2026-03-13 00:05:24.645758 | TASK [Get ssh keypair from terraform environment]
2026-03-13 00:05:25.193742 | orchestrator | ok: Runtime: 0:00:00.005230
2026-03-13 00:05:25.212408 |
2026-03-13 00:05:25.212618 | TASK [Point out that the following task takes some time and does not give any output]
2026-03-13 00:05:25.251002 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-03-13 00:05:25.261637 |
2026-03-13 00:05:25.261771 | TASK [Run manager part 0]
2026-03-13 00:05:26.311485 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-13 00:05:26.371994 | orchestrator |
2026-03-13 00:05:26.372052 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-03-13 00:05:26.372062 | orchestrator |
2026-03-13 00:05:26.372083 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-03-13 00:05:28.116303 | orchestrator | ok: [testbed-manager]
2026-03-13 00:05:28.116371 | orchestrator |
2026-03-13 00:05:28.116393 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-03-13 00:05:28.116401 | orchestrator |
2026-03-13 00:05:28.116409 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-13 00:05:30.004546 | orchestrator | ok: [testbed-manager]
2026-03-13 00:05:30.004621 | orchestrator |
2026-03-13 00:05:30.004632 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-03-13 00:05:30.790082 | orchestrator | ok: [testbed-manager]
2026-03-13 00:05:30.790148 | orchestrator |
2026-03-13 00:05:30.790159 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-03-13 00:05:30.840523 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:05:30.840578 | orchestrator |
2026-03-13 00:05:30.840590 | orchestrator | TASK [Update package cache] ****************************************************
2026-03-13 00:05:30.882210 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:05:30.882284 | orchestrator |
2026-03-13 00:05:30.882296 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-13 00:05:30.917180 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:05:30.917248 | orchestrator |
2026-03-13 00:05:30.917259 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-13 00:05:30.960897 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:05:30.960958 | orchestrator |
2026-03-13 00:05:30.960966 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-03-13 00:05:30.995642 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:05:30.995704 | orchestrator |
2026-03-13 00:05:30.995714 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-03-13 00:05:31.027620 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:05:31.027668 | orchestrator |
2026-03-13 00:05:31.027676 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-03-13 00:05:31.062503 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:05:31.062558 | orchestrator |
2026-03-13 00:05:31.062566 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-03-13 00:05:31.755159 | orchestrator | changed: [testbed-manager]
2026-03-13 00:05:31.755224 | orchestrator |
2026-03-13 00:05:31.755234 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-03-13 00:08:22.781200 | orchestrator | changed: [testbed-manager]
2026-03-13 00:08:22.781285 | orchestrator |
2026-03-13 00:08:22.781301 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-03-13 00:10:10.576094 | orchestrator | changed: [testbed-manager]
2026-03-13 00:10:10.576150 | orchestrator |
2026-03-13 00:10:10.576162 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-13 00:10:32.262150 | orchestrator | changed: [testbed-manager]
2026-03-13 00:10:32.262228 | orchestrator |
2026-03-13 00:10:32.262242 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-13 00:10:41.606326 | orchestrator | changed: [testbed-manager]
2026-03-13 00:10:41.606398 | orchestrator |
2026-03-13 00:10:41.606410 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-03-13 00:10:41.643744 | orchestrator | ok: [testbed-manager]
2026-03-13 00:10:41.643800 | orchestrator |
2026-03-13 00:10:41.643807 | orchestrator | TASK [Get current user] ********************************************************
2026-03-13 00:10:42.431904 | orchestrator | ok: [testbed-manager]
2026-03-13 00:10:42.431975 | orchestrator |
2026-03-13 00:10:42.431986 | orchestrator | TASK [Create venv directory] ***************************************************
2026-03-13 00:10:43.159582 | orchestrator | changed: [testbed-manager]
2026-03-13 00:10:43.159644 | orchestrator |
2026-03-13 00:10:43.159654 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-03-13 00:10:50.178750 | orchestrator | changed: [testbed-manager]
2026-03-13 00:10:50.178816 | orchestrator |
2026-03-13 00:10:50.178847 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-03-13 00:10:56.117758 | orchestrator | changed: [testbed-manager]
2026-03-13 00:10:56.117845 | orchestrator |
2026-03-13 00:10:56.117864 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2026-03-13 00:10:58.664571 | orchestrator | changed:
[testbed-manager] 2026-03-13 00:10:58.665024 | orchestrator | 2026-03-13 00:10:58.665057 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-03-13 00:11:00.388934 | orchestrator | changed: [testbed-manager] 2026-03-13 00:11:00.388981 | orchestrator | 2026-03-13 00:11:00.388992 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-13 00:11:01.487211 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-13 00:11:01.487303 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-13 00:11:01.487319 | orchestrator | 2026-03-13 00:11:01.487332 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-13 00:11:01.522792 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-13 00:11:01.522869 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-13 00:11:01.522882 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-13 00:11:01.522895 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-13 00:11:08.700609 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-13 00:11:08.700745 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-13 00:11:08.700760 | orchestrator | 2026-03-13 00:11:08.700770 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-13 00:11:09.279377 | orchestrator | changed: [testbed-manager] 2026-03-13 00:11:09.279457 | orchestrator | 2026-03-13 00:11:09.279474 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-13 00:13:29.769200 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-13 00:13:29.769311 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-13 00:13:29.769326 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-13 00:13:29.769336 | orchestrator | 2026-03-13 00:13:29.769346 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-13 00:13:32.054636 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-03-13 00:13:32.054726 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-13 00:13:32.054743 | orchestrator | 2026-03-13 00:13:32.054756 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-13 00:13:32.054768 | orchestrator | 2026-03-13 00:13:32.054780 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-13 00:13:33.427201 | orchestrator | ok: [testbed-manager] 2026-03-13 00:13:33.427289 | orchestrator | 2026-03-13 00:13:33.427309 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-13 00:13:33.474625 | orchestrator | ok: [testbed-manager] 2026-03-13 00:13:33.474685 | 
orchestrator | 2026-03-13 00:13:33.474695 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-13 00:13:33.543292 | orchestrator | ok: [testbed-manager] 2026-03-13 00:13:33.543370 | orchestrator | 2026-03-13 00:13:33.543385 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-13 00:13:34.324553 | orchestrator | changed: [testbed-manager] 2026-03-13 00:13:34.324595 | orchestrator | 2026-03-13 00:13:34.324603 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-13 00:13:35.059931 | orchestrator | changed: [testbed-manager] 2026-03-13 00:13:35.059977 | orchestrator | 2026-03-13 00:13:35.059984 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-13 00:13:36.393952 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-13 00:13:36.394073 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-13 00:13:36.394092 | orchestrator | 2026-03-13 00:13:36.394121 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-13 00:13:37.847193 | orchestrator | changed: [testbed-manager] 2026-03-13 00:13:37.847348 | orchestrator | 2026-03-13 00:13:37.847369 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-13 00:13:39.595558 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-13 00:13:39.595642 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-13 00:13:39.595656 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-13 00:13:39.595668 | orchestrator | 2026-03-13 00:13:39.595681 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-13 00:13:39.651057 | orchestrator | skipping: 
[testbed-manager] 2026-03-13 00:13:39.651148 | orchestrator | 2026-03-13 00:13:39.651167 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-13 00:13:39.720389 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:13:39.720479 | orchestrator | 2026-03-13 00:13:39.720499 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-13 00:13:40.302151 | orchestrator | changed: [testbed-manager] 2026-03-13 00:13:40.302572 | orchestrator | 2026-03-13 00:13:40.302592 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-13 00:13:40.370415 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:13:40.370499 | orchestrator | 2026-03-13 00:13:40.370515 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-13 00:13:41.238639 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-13 00:13:41.238733 | orchestrator | changed: [testbed-manager] 2026-03-13 00:13:41.238750 | orchestrator | 2026-03-13 00:13:41.238762 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-13 00:13:41.279268 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:13:41.279352 | orchestrator | 2026-03-13 00:13:41.279368 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-13 00:13:41.320004 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:13:41.320075 | orchestrator | 2026-03-13 00:13:41.320085 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-13 00:13:41.355858 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:13:41.355943 | orchestrator | 2026-03-13 00:13:41.355960 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-13 00:13:41.432953 | 
orchestrator | skipping: [testbed-manager] 2026-03-13 00:13:41.433006 | orchestrator | 2026-03-13 00:13:41.433013 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-13 00:13:42.163374 | orchestrator | ok: [testbed-manager] 2026-03-13 00:13:42.163441 | orchestrator | 2026-03-13 00:13:42.163451 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-13 00:13:42.163458 | orchestrator | 2026-03-13 00:13:42.163465 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-13 00:13:43.540866 | orchestrator | ok: [testbed-manager] 2026-03-13 00:13:43.540958 | orchestrator | 2026-03-13 00:13:43.540976 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-13 00:13:44.493655 | orchestrator | changed: [testbed-manager] 2026-03-13 00:13:44.493772 | orchestrator | 2026-03-13 00:13:44.493779 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 00:13:44.493785 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-03-13 00:13:44.493789 | orchestrator | 2026-03-13 00:13:45.082479 | orchestrator | ok: Runtime: 0:08:19.037123 2026-03-13 00:13:45.103611 | 2026-03-13 00:13:45.103791 | TASK [Point out that logging in to the manager is now possible] 2026-03-13 00:13:45.154540 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-03-13 00:13:45.171820 | 2026-03-13 00:13:45.172088 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-13 00:13:45.223676 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output here. It takes a few minutes for this task to complete. 
2026-03-13 00:13:45.234220 | 2026-03-13 00:13:45.234359 | TASK [Run manager part 1 + 2] 2026-03-13 00:13:46.738434 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-13 00:13:46.805704 | orchestrator | 2026-03-13 00:13:46.805756 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-13 00:13:46.805763 | orchestrator | 2026-03-13 00:13:46.805775 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-13 00:13:49.639278 | orchestrator | ok: [testbed-manager] 2026-03-13 00:13:49.639960 | orchestrator | 2026-03-13 00:13:49.640026 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-13 00:13:49.684639 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:13:49.684747 | orchestrator | 2026-03-13 00:13:49.684767 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-13 00:13:49.731713 | orchestrator | ok: [testbed-manager] 2026-03-13 00:13:49.731804 | orchestrator | 2026-03-13 00:13:49.731823 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-13 00:13:49.785044 | orchestrator | ok: [testbed-manager] 2026-03-13 00:13:49.785135 | orchestrator | 2026-03-13 00:13:49.785156 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-13 00:13:49.861950 | orchestrator | ok: [testbed-manager] 2026-03-13 00:13:49.862226 | orchestrator | 2026-03-13 00:13:49.862256 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-13 00:13:49.925245 | orchestrator | ok: [testbed-manager] 2026-03-13 00:13:49.925297 | orchestrator | 2026-03-13 00:13:49.925305 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-13 00:13:49.980065 | 
orchestrator | included: /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-13 00:13:49.980154 | orchestrator | 2026-03-13 00:13:49.980173 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-13 00:13:50.688303 | orchestrator | ok: [testbed-manager] 2026-03-13 00:13:50.688384 | orchestrator | 2026-03-13 00:13:50.688400 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-13 00:13:50.744599 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:13:50.744663 | orchestrator | 2026-03-13 00:13:50.744674 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-13 00:13:52.120953 | orchestrator | changed: [testbed-manager] 2026-03-13 00:13:52.121013 | orchestrator | 2026-03-13 00:13:52.121025 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-13 00:13:52.700972 | orchestrator | ok: [testbed-manager] 2026-03-13 00:13:52.701026 | orchestrator | 2026-03-13 00:13:52.701034 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-13 00:13:53.820696 | orchestrator | changed: [testbed-manager] 2026-03-13 00:13:53.820760 | orchestrator | 2026-03-13 00:13:53.820778 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-13 00:14:08.688687 | orchestrator | changed: [testbed-manager] 2026-03-13 00:14:08.688800 | orchestrator | 2026-03-13 00:14:08.688818 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-13 00:14:09.341738 | orchestrator | ok: [testbed-manager] 2026-03-13 00:14:09.341813 | orchestrator | 2026-03-13 00:14:09.341831 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-03-13 00:14:09.397391 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:14:09.397449 | orchestrator | 2026-03-13 00:14:09.397457 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-13 00:14:10.363102 | orchestrator | changed: [testbed-manager] 2026-03-13 00:14:10.363189 | orchestrator | 2026-03-13 00:14:10.363205 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-13 00:14:11.286439 | orchestrator | changed: [testbed-manager] 2026-03-13 00:14:11.286501 | orchestrator | 2026-03-13 00:14:11.286536 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-13 00:14:11.855892 | orchestrator | changed: [testbed-manager] 2026-03-13 00:14:11.855975 | orchestrator | 2026-03-13 00:14:11.855993 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-13 00:14:11.896401 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-13 00:14:11.896462 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-13 00:14:11.896468 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-13 00:14:11.896473 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-13 00:14:14.025773 | orchestrator | changed: [testbed-manager] 2026-03-13 00:14:14.025867 | orchestrator | 2026-03-13 00:14:14.025889 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-13 00:14:22.635821 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-13 00:14:22.635859 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-13 00:14:22.635868 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-13 00:14:22.635873 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-13 00:14:22.635881 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-13 00:14:22.635886 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-13 00:14:22.635891 | orchestrator | 2026-03-13 00:14:22.635896 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-13 00:14:23.672920 | orchestrator | changed: [testbed-manager] 2026-03-13 00:14:23.672959 | orchestrator | 2026-03-13 00:14:23.672967 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-03-13 00:14:23.709027 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:14:23.709085 | orchestrator | 2026-03-13 00:14:23.709095 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-13 00:14:26.783057 | orchestrator | changed: [testbed-manager] 2026-03-13 00:14:26.783154 | orchestrator | 2026-03-13 00:14:26.783172 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-13 00:14:26.822801 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:14:26.822865 | orchestrator | 2026-03-13 00:14:26.822875 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-13 00:16:05.907447 | orchestrator | changed: [testbed-manager] 2026-03-13 
00:16:05.907603 | orchestrator | 2026-03-13 00:16:05.907635 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-13 00:16:07.098713 | orchestrator | ok: [testbed-manager] 2026-03-13 00:16:07.098754 | orchestrator | 2026-03-13 00:16:07.098761 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 00:16:07.098769 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-03-13 00:16:07.098774 | orchestrator | 2026-03-13 00:16:07.348697 | orchestrator | ok: Runtime: 0:02:21.662592 2026-03-13 00:16:07.366609 | 2026-03-13 00:16:07.366773 | TASK [Reboot manager] 2026-03-13 00:16:08.910605 | orchestrator | ok: Runtime: 0:00:00.945681 2026-03-13 00:16:08.928331 | 2026-03-13 00:16:08.928479 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-13 00:16:23.498925 | orchestrator | ok 2026-03-13 00:16:23.509820 | 2026-03-13 00:16:23.509958 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-13 00:17:23.560902 | orchestrator | ok 2026-03-13 00:17:23.570513 | 2026-03-13 00:17:23.570652 | TASK [Deploy manager + bootstrap nodes] 2026-03-13 00:17:25.799158 | orchestrator | 2026-03-13 00:17:25.799459 | orchestrator | # DEPLOY MANAGER 2026-03-13 00:17:25.799488 | orchestrator | 2026-03-13 00:17:25.799504 | orchestrator | + set -e 2026-03-13 00:17:25.799517 | orchestrator | + echo 2026-03-13 00:17:25.799531 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-13 00:17:25.799550 | orchestrator | + echo 2026-03-13 00:17:25.799593 | orchestrator | + cat /opt/manager-vars.sh 2026-03-13 00:17:25.801895 | orchestrator | export NUMBER_OF_NODES=6 2026-03-13 00:17:25.801931 | orchestrator | 2026-03-13 00:17:25.801945 | orchestrator | export CEPH_VERSION=reef 2026-03-13 00:17:25.801960 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-13 00:17:25.801973 | orchestrator 
| export MANAGER_VERSION=latest 2026-03-13 00:17:25.801996 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-03-13 00:17:25.802007 | orchestrator | 2026-03-13 00:17:25.802073 | orchestrator | export ARA=false 2026-03-13 00:17:25.802088 | orchestrator | export DEPLOY_MODE=manager 2026-03-13 00:17:25.802105 | orchestrator | export TEMPEST=true 2026-03-13 00:17:25.802117 | orchestrator | export IS_ZUUL=true 2026-03-13 00:17:25.802128 | orchestrator | 2026-03-13 00:17:25.802146 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.64 2026-03-13 00:17:25.802157 | orchestrator | export EXTERNAL_API=false 2026-03-13 00:17:25.802168 | orchestrator | 2026-03-13 00:17:25.802179 | orchestrator | export IMAGE_USER=ubuntu 2026-03-13 00:17:25.802193 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-13 00:17:25.802204 | orchestrator | 2026-03-13 00:17:25.802215 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-13 00:17:25.802234 | orchestrator | 2026-03-13 00:17:25.802245 | orchestrator | + echo 2026-03-13 00:17:25.802258 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-13 00:17:25.803019 | orchestrator | ++ export INTERACTIVE=false 2026-03-13 00:17:25.803040 | orchestrator | ++ INTERACTIVE=false 2026-03-13 00:17:25.803054 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-13 00:17:25.803066 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-13 00:17:25.803286 | orchestrator | + source /opt/manager-vars.sh 2026-03-13 00:17:25.803341 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-13 00:17:25.803355 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-13 00:17:25.803366 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-13 00:17:25.803377 | orchestrator | ++ CEPH_VERSION=reef 2026-03-13 00:17:25.803388 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-13 00:17:25.803406 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-13 00:17:25.803441 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-13 00:17:25.803453 | 
orchestrator | ++ MANAGER_VERSION=latest 2026-03-13 00:17:25.803583 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-13 00:17:25.803609 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-13 00:17:25.803620 | orchestrator | ++ export ARA=false 2026-03-13 00:17:25.803631 | orchestrator | ++ ARA=false 2026-03-13 00:17:25.803649 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-13 00:17:25.803660 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-13 00:17:25.803670 | orchestrator | ++ export TEMPEST=true 2026-03-13 00:17:25.803681 | orchestrator | ++ TEMPEST=true 2026-03-13 00:17:25.803692 | orchestrator | ++ export IS_ZUUL=true 2026-03-13 00:17:25.803703 | orchestrator | ++ IS_ZUUL=true 2026-03-13 00:17:25.803714 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.64 2026-03-13 00:17:25.803724 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.64 2026-03-13 00:17:25.803739 | orchestrator | ++ export EXTERNAL_API=false 2026-03-13 00:17:25.803751 | orchestrator | ++ EXTERNAL_API=false 2026-03-13 00:17:25.803761 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-13 00:17:25.803778 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-13 00:17:25.803789 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-13 00:17:25.803800 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-13 00:17:25.803814 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-13 00:17:25.803826 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-13 00:17:25.803837 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-13 00:17:25.853796 | orchestrator | + docker version 2026-03-13 00:17:25.939907 | orchestrator | Client: Docker Engine - Community 2026-03-13 00:17:25.940022 | orchestrator | Version: 27.5.1 2026-03-13 00:17:25.940048 | orchestrator | API version: 1.47 2026-03-13 00:17:25.940071 | orchestrator | Go version: go1.22.11 2026-03-13 00:17:25.940101 | orchestrator | Git commit: 9f9e405 2026-03-13 00:17:25.940113 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-13 00:17:25.940125 | orchestrator | OS/Arch: linux/amd64 2026-03-13 00:17:25.940136 | orchestrator | Context: default 2026-03-13 00:17:25.940147 | orchestrator | 2026-03-13 00:17:25.940158 | orchestrator | Server: Docker Engine - Community 2026-03-13 00:17:25.940169 | orchestrator | Engine: 2026-03-13 00:17:25.940181 | orchestrator | Version: 27.5.1 2026-03-13 00:17:25.940192 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-13 00:17:25.940234 | orchestrator | Go version: go1.22.11 2026-03-13 00:17:25.940246 | orchestrator | Git commit: 4c9b3b0 2026-03-13 00:17:25.940257 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-13 00:17:25.940268 | orchestrator | OS/Arch: linux/amd64 2026-03-13 00:17:25.940279 | orchestrator | Experimental: false 2026-03-13 00:17:25.940290 | orchestrator | containerd: 2026-03-13 00:17:25.940300 | orchestrator | Version: v2.2.2 2026-03-13 00:17:25.940326 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-03-13 00:17:25.940338 | orchestrator | runc: 2026-03-13 00:17:25.940349 | orchestrator | Version: 1.3.4 2026-03-13 00:17:25.940360 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-13 00:17:25.940371 | orchestrator | docker-init: 2026-03-13 00:17:25.940381 | orchestrator | Version: 0.19.0 2026-03-13 00:17:25.940393 | orchestrator | GitCommit: de40ad0 2026-03-13 00:17:25.942744 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-13 00:17:25.951354 | orchestrator | + set -e 2026-03-13 00:17:25.951473 | orchestrator | + source /opt/manager-vars.sh 2026-03-13 00:17:25.951513 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-13 00:17:25.951548 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-13 00:17:25.951570 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-13 00:17:25.951589 | orchestrator | ++ CEPH_VERSION=reef 2026-03-13 00:17:25.951632 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-13 
00:17:25.951651 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-13 00:17:25.951669 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-13 00:17:25.951689 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-13 00:17:25.951708 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-13 00:17:25.951725 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-13 00:17:25.951743 | orchestrator | ++ export ARA=false 2026-03-13 00:17:25.951762 | orchestrator | ++ ARA=false 2026-03-13 00:17:25.951781 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-13 00:17:25.951802 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-13 00:17:25.951821 | orchestrator | ++ export TEMPEST=true 2026-03-13 00:17:25.951857 | orchestrator | ++ TEMPEST=true 2026-03-13 00:17:25.951884 | orchestrator | ++ export IS_ZUUL=true 2026-03-13 00:17:25.951902 | orchestrator | ++ IS_ZUUL=true 2026-03-13 00:17:25.951920 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.64 2026-03-13 00:17:25.951938 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.64 2026-03-13 00:17:25.951956 | orchestrator | ++ export EXTERNAL_API=false 2026-03-13 00:17:25.951981 | orchestrator | ++ EXTERNAL_API=false 2026-03-13 00:17:25.952004 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-13 00:17:25.952021 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-13 00:17:25.952038 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-13 00:17:25.952055 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-13 00:17:25.952075 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-13 00:17:25.952094 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-13 00:17:25.952112 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-13 00:17:25.952141 | orchestrator | ++ export INTERACTIVE=false 2026-03-13 00:17:25.952153 | orchestrator | ++ INTERACTIVE=false 2026-03-13 00:17:25.952164 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-13 00:17:25.952179 | orchestrator | ++ OSISM_APPLY_RETRY=1 
2026-03-13 00:17:25.952190 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-13 00:17:25.952201 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-13 00:17:25.952212 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2026-03-13 00:17:25.957456 | orchestrator | + set -e 2026-03-13 00:17:25.957510 | orchestrator | + VERSION=reef 2026-03-13 00:17:25.958055 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-03-13 00:17:25.963395 | orchestrator | + [[ -n ceph_version: reef ]] 2026-03-13 00:17:25.963545 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-03-13 00:17:25.967870 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2026-03-13 00:17:25.974246 | orchestrator | + set -e 2026-03-13 00:17:25.974292 | orchestrator | + VERSION=2024.2 2026-03-13 00:17:25.975120 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-03-13 00:17:25.978623 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-03-13 00:17:25.978679 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2026-03-13 00:17:25.981782 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-03-13 00:17:25.982145 | orchestrator | ++ semver latest 7.0.0 2026-03-13 00:17:26.029710 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-13 00:17:26.029793 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-13 00:17:26.029807 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-03-13 00:17:26.030305 | orchestrator | ++ semver latest 10.0.0-0 2026-03-13 00:17:26.083018 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-13 00:17:26.083720 | orchestrator | ++ semver 2024.2 2025.1 2026-03-13 00:17:26.141360 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-13 00:17:26.141467 | orchestrator | + 
/opt/configuration/scripts/enable-resource-nodes.sh 2026-03-13 00:17:26.234179 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-13 00:17:26.236705 | orchestrator | + source /opt/venv/bin/activate 2026-03-13 00:17:26.238149 | orchestrator | ++ deactivate nondestructive 2026-03-13 00:17:26.238189 | orchestrator | ++ '[' -n '' ']' 2026-03-13 00:17:26.238201 | orchestrator | ++ '[' -n '' ']' 2026-03-13 00:17:26.238212 | orchestrator | ++ hash -r 2026-03-13 00:17:26.238222 | orchestrator | ++ '[' -n '' ']' 2026-03-13 00:17:26.238233 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-13 00:17:26.238244 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-13 00:17:26.238258 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-03-13 00:17:26.238275 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-13 00:17:26.238295 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-13 00:17:26.238306 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-13 00:17:26.238316 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-13 00:17:26.238328 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-13 00:17:26.238344 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-13 00:17:26.238355 | orchestrator | ++ export PATH 2026-03-13 00:17:26.238366 | orchestrator | ++ '[' -n '' ']' 2026-03-13 00:17:26.238525 | orchestrator | ++ '[' -z '' ']' 2026-03-13 00:17:26.238560 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-13 00:17:26.238579 | orchestrator | ++ PS1='(venv) ' 2026-03-13 00:17:26.238598 | orchestrator | ++ export PS1 2026-03-13 00:17:26.238616 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-13 00:17:26.238640 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-13 00:17:26.238666 | orchestrator | ++ hash -r 2026-03-13 00:17:26.238799 | orchestrator | + ansible-playbook -i 
testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-03-13 00:17:27.295571 | orchestrator | 2026-03-13 00:17:27.295678 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-03-13 00:17:27.295713 | orchestrator | 2026-03-13 00:17:27.295727 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-13 00:17:27.826805 | orchestrator | ok: [testbed-manager] 2026-03-13 00:17:27.826905 | orchestrator | 2026-03-13 00:17:27.826920 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-03-13 00:17:28.680190 | orchestrator | changed: [testbed-manager] 2026-03-13 00:17:28.680276 | orchestrator | 2026-03-13 00:17:28.680287 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-03-13 00:17:28.680298 | orchestrator | 2026-03-13 00:17:28.680306 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-13 00:17:30.892939 | orchestrator | ok: [testbed-manager] 2026-03-13 00:17:30.893050 | orchestrator | 2026-03-13 00:17:30.893069 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-03-13 00:17:30.946873 | orchestrator | ok: [testbed-manager] 2026-03-13 00:17:30.946967 | orchestrator | 2026-03-13 00:17:30.946983 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-03-13 00:17:31.389067 | orchestrator | changed: [testbed-manager] 2026-03-13 00:17:31.389167 | orchestrator | 2026-03-13 00:17:31.389183 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-03-13 00:17:31.435632 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:17:31.435743 | orchestrator | 2026-03-13 00:17:31.435767 | orchestrator | TASK [Install HWE 
kernel package on Ubuntu] ************************************ 2026-03-13 00:17:31.781894 | orchestrator | changed: [testbed-manager] 2026-03-13 00:17:31.781997 | orchestrator | 2026-03-13 00:17:31.782014 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-03-13 00:17:32.110835 | orchestrator | ok: [testbed-manager] 2026-03-13 00:17:32.110930 | orchestrator | 2026-03-13 00:17:32.110946 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-03-13 00:17:32.242386 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:17:32.242511 | orchestrator | 2026-03-13 00:17:32.242527 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-03-13 00:17:32.242539 | orchestrator | 2026-03-13 00:17:32.242551 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-13 00:17:33.929637 | orchestrator | ok: [testbed-manager] 2026-03-13 00:17:33.929733 | orchestrator | 2026-03-13 00:17:33.929749 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-03-13 00:17:34.040646 | orchestrator | included: osism.services.traefik for testbed-manager 2026-03-13 00:17:34.040756 | orchestrator | 2026-03-13 00:17:34.040778 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-03-13 00:17:34.095208 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-03-13 00:17:34.095286 | orchestrator | 2026-03-13 00:17:34.095297 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-03-13 00:17:35.139354 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-03-13 00:17:35.139488 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 
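The `set-ceph-version.sh` and `set-openstack-version.sh` traces earlier follow the same pattern: grep for the existing key, and only if it is present, rewrite it in place with `sed -i`. A minimal sketch against a temporary file (the real scripts operate on `/opt/configuration/environments/manager/configuration.yml`):

```shell
#!/usr/bin/env bash
# Sketch of the grep-then-sed version pinning seen in set-ceph-version.sh.
set -e

VERSION=reef
config=$(mktemp)                          # stand-in for configuration.yml
echo 'ceph_version: quincy' > "$config"

# Only rewrite the key if it already exists, as the trace does
# ([[ -n ceph_version: ... ]]).
if [[ -n "$(grep '^ceph_version:' "$config")" ]]; then
  sed -i "s/ceph_version: .*/ceph_version: ${VERSION}/g" "$config"
fi

grep '^ceph_version:' "$config"
```

Guarding on the grep keeps the script from silently appending nothing: if the key is absent, the file is left untouched instead of sed matching zero lines.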
2026-03-13 00:17:35.139504 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-03-13 00:17:35.139514 | orchestrator | 2026-03-13 00:17:35.139525 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-03-13 00:17:36.904877 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-03-13 00:17:36.904984 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-03-13 00:17:36.904999 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-03-13 00:17:36.905012 | orchestrator | 2026-03-13 00:17:36.905024 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-03-13 00:17:37.519687 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-13 00:17:37.519784 | orchestrator | changed: [testbed-manager] 2026-03-13 00:17:37.519800 | orchestrator | 2026-03-13 00:17:37.519812 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-03-13 00:17:38.156859 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-13 00:17:38.156996 | orchestrator | changed: [testbed-manager] 2026-03-13 00:17:38.157015 | orchestrator | 2026-03-13 00:17:38.157028 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-03-13 00:17:38.208150 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:17:38.208259 | orchestrator | 2026-03-13 00:17:38.208284 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-03-13 00:17:38.563409 | orchestrator | ok: [testbed-manager] 2026-03-13 00:17:38.563539 | orchestrator | 2026-03-13 00:17:38.563557 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-03-13 00:17:38.629566 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-03-13 00:17:38.629660 | orchestrator | 2026-03-13 00:17:38.629682 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-03-13 00:17:39.649775 | orchestrator | changed: [testbed-manager] 2026-03-13 00:17:39.649873 | orchestrator | 2026-03-13 00:17:39.649889 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-03-13 00:17:40.430983 | orchestrator | changed: [testbed-manager] 2026-03-13 00:17:40.431070 | orchestrator | 2026-03-13 00:17:40.431088 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-03-13 00:17:50.543482 | orchestrator | changed: [testbed-manager] 2026-03-13 00:17:50.543575 | orchestrator | 2026-03-13 00:17:50.543612 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-03-13 00:17:50.601980 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:17:50.602231 | orchestrator | 2026-03-13 00:17:50.602262 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-03-13 00:17:50.602283 | orchestrator | 2026-03-13 00:17:50.602303 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-13 00:17:52.339924 | orchestrator | ok: [testbed-manager] 2026-03-13 00:17:52.340036 | orchestrator | 2026-03-13 00:17:52.340082 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-03-13 00:17:52.439585 | orchestrator | included: osism.services.manager for testbed-manager 2026-03-13 00:17:52.439678 | orchestrator | 2026-03-13 00:17:52.439693 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-03-13 00:17:52.491307 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-13 00:17:52.491481 | orchestrator | 2026-03-13 00:17:52.491512 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-03-13 00:17:54.462996 | orchestrator | ok: [testbed-manager] 2026-03-13 00:17:54.463129 | orchestrator | 2026-03-13 00:17:54.463156 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-03-13 00:17:54.512958 | orchestrator | ok: [testbed-manager] 2026-03-13 00:17:54.513052 | orchestrator | 2026-03-13 00:17:54.513067 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-13 00:17:54.622492 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-13 00:17:54.622607 | orchestrator | 2026-03-13 00:17:54.622626 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-13 00:17:57.189591 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-03-13 00:17:57.189701 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-03-13 00:17:57.189717 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-13 00:17:57.189731 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-03-13 00:17:57.189742 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-13 00:17:57.189754 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-13 00:17:57.189765 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-13 00:17:57.189776 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-03-13 00:17:57.189788 | orchestrator | 2026-03-13 00:17:57.189800 | orchestrator | TASK 
[osism.services.manager : Copy all environment file] ********************** 2026-03-13 00:17:57.747312 | orchestrator | changed: [testbed-manager] 2026-03-13 00:17:57.747447 | orchestrator | 2026-03-13 00:17:57.747465 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-03-13 00:17:58.316832 | orchestrator | changed: [testbed-manager] 2026-03-13 00:17:58.316932 | orchestrator | 2026-03-13 00:17:58.316948 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-03-13 00:17:58.386166 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-13 00:17:58.386263 | orchestrator | 2026-03-13 00:17:58.386280 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-03-13 00:17:59.458058 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-03-13 00:17:59.458157 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-03-13 00:17:59.458172 | orchestrator | 2026-03-13 00:17:59.458185 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-13 00:18:00.083453 | orchestrator | changed: [testbed-manager] 2026-03-13 00:18:00.083563 | orchestrator | 2026-03-13 00:18:00.083583 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-13 00:18:00.128140 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:18:00.128222 | orchestrator | 2026-03-13 00:18:00.128237 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-13 00:18:00.211328 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-13 00:18:00.212288 | orchestrator | 2026-03-13 00:18:00.212348 | orchestrator | TASK 
[osism.services.manager : Copy frontend environment file] ***************** 2026-03-13 00:18:00.830979 | orchestrator | changed: [testbed-manager] 2026-03-13 00:18:00.831099 | orchestrator | 2026-03-13 00:18:00.831123 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-03-13 00:18:00.883053 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-13 00:18:00.883192 | orchestrator | 2026-03-13 00:18:00.883215 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-03-13 00:18:02.134764 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-13 00:18:02.134860 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-13 00:18:02.134876 | orchestrator | changed: [testbed-manager] 2026-03-13 00:18:02.134889 | orchestrator | 2026-03-13 00:18:02.134904 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-13 00:18:02.722426 | orchestrator | changed: [testbed-manager] 2026-03-13 00:18:02.722506 | orchestrator | 2026-03-13 00:18:02.722518 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-13 00:18:02.784117 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:18:02.784207 | orchestrator | 2026-03-13 00:18:02.784220 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-13 00:18:02.871341 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-13 00:18:02.871487 | orchestrator | 2026-03-13 00:18:02.871512 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-13 00:18:03.331298 | orchestrator | changed: [testbed-manager] 2026-03-13 
00:18:03.331393 | orchestrator | 2026-03-13 00:18:03.331471 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-13 00:18:03.699236 | orchestrator | changed: [testbed-manager] 2026-03-13 00:18:03.699333 | orchestrator | 2026-03-13 00:18:03.699349 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-13 00:18:04.819637 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-03-13 00:18:04.819734 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-03-13 00:18:04.819749 | orchestrator | 2026-03-13 00:18:04.819762 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-03-13 00:18:05.393824 | orchestrator | changed: [testbed-manager] 2026-03-13 00:18:05.393929 | orchestrator | 2026-03-13 00:18:05.393946 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-03-13 00:18:05.744857 | orchestrator | ok: [testbed-manager] 2026-03-13 00:18:05.744958 | orchestrator | 2026-03-13 00:18:05.744975 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-03-13 00:18:06.078281 | orchestrator | changed: [testbed-manager] 2026-03-13 00:18:06.078379 | orchestrator | 2026-03-13 00:18:06.078423 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-03-13 00:18:06.124775 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:18:06.124869 | orchestrator | 2026-03-13 00:18:06.124884 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-03-13 00:18:06.194633 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-03-13 00:18:06.194734 | orchestrator | 2026-03-13 00:18:06.194750 | orchestrator | TASK 
[osism.services.manager : Include wrapper vars file] ********************** 2026-03-13 00:18:06.238094 | orchestrator | ok: [testbed-manager] 2026-03-13 00:18:06.238176 | orchestrator | 2026-03-13 00:18:06.238192 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-03-13 00:18:08.263349 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-03-13 00:18:08.263497 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-03-13 00:18:08.263514 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-03-13 00:18:08.263527 | orchestrator | 2026-03-13 00:18:08.263539 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-03-13 00:18:08.958688 | orchestrator | changed: [testbed-manager] 2026-03-13 00:18:08.958787 | orchestrator | 2026-03-13 00:18:08.958802 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-03-13 00:18:09.654346 | orchestrator | changed: [testbed-manager] 2026-03-13 00:18:09.654475 | orchestrator | 2026-03-13 00:18:09.654492 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-03-13 00:18:10.325290 | orchestrator | changed: [testbed-manager] 2026-03-13 00:18:10.326328 | orchestrator | 2026-03-13 00:18:10.326369 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-03-13 00:18:10.402069 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-03-13 00:18:10.402172 | orchestrator | 2026-03-13 00:18:10.402189 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-03-13 00:18:10.439228 | orchestrator | ok: [testbed-manager] 2026-03-13 00:18:10.439320 | orchestrator | 2026-03-13 00:18:10.439334 | orchestrator | TASK 
[osism.services.manager : Copy scripts] *********************************** 2026-03-13 00:18:11.135564 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-03-13 00:18:11.135664 | orchestrator | 2026-03-13 00:18:11.135680 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-03-13 00:18:11.223683 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-03-13 00:18:11.223774 | orchestrator | 2026-03-13 00:18:11.223788 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-03-13 00:18:11.881722 | orchestrator | changed: [testbed-manager] 2026-03-13 00:18:11.881824 | orchestrator | 2026-03-13 00:18:11.881840 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-03-13 00:18:12.425623 | orchestrator | ok: [testbed-manager] 2026-03-13 00:18:12.425720 | orchestrator | 2026-03-13 00:18:12.425736 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-03-13 00:18:12.481386 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:18:12.481503 | orchestrator | 2026-03-13 00:18:12.481519 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-03-13 00:18:12.530641 | orchestrator | ok: [testbed-manager] 2026-03-13 00:18:12.530733 | orchestrator | 2026-03-13 00:18:12.530747 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-03-13 00:18:13.282621 | orchestrator | changed: [testbed-manager] 2026-03-13 00:18:13.282719 | orchestrator | 2026-03-13 00:18:13.282732 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-03-13 00:19:17.430590 | orchestrator | changed: [testbed-manager] 2026-03-13 00:19:17.430693 | orchestrator | 2026-03-13 
00:19:17.430706 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-03-13 00:19:18.409542 | orchestrator | ok: [testbed-manager] 2026-03-13 00:19:18.409647 | orchestrator | 2026-03-13 00:19:18.409663 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-03-13 00:19:18.467891 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:19:18.467982 | orchestrator | 2026-03-13 00:19:18.467996 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-03-13 00:19:20.884843 | orchestrator | changed: [testbed-manager] 2026-03-13 00:19:20.884948 | orchestrator | 2026-03-13 00:19:20.884965 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-03-13 00:19:20.978777 | orchestrator | ok: [testbed-manager] 2026-03-13 00:19:20.978895 | orchestrator | 2026-03-13 00:19:20.978935 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-13 00:19:20.978948 | orchestrator | 2026-03-13 00:19:20.978959 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-03-13 00:19:21.026140 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:19:21.026236 | orchestrator | 2026-03-13 00:19:21.026256 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-03-13 00:20:21.072373 | orchestrator | Pausing for 60 seconds 2026-03-13 00:20:21.072490 | orchestrator | changed: [testbed-manager] 2026-03-13 00:20:21.072507 | orchestrator | 2026-03-13 00:20:21.072521 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-03-13 00:20:24.144490 | orchestrator | changed: [testbed-manager] 2026-03-13 00:20:24.144573 | orchestrator | 2026-03-13 00:20:24.144585 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for 
an healthy manager service] *** 2026-03-13 00:21:05.690871 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-03-13 00:21:05.690987 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-03-13 00:21:05.691004 | orchestrator | changed: [testbed-manager] 2026-03-13 00:21:05.691045 | orchestrator | 2026-03-13 00:21:05.691058 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-03-13 00:21:15.206733 | orchestrator | changed: [testbed-manager] 2026-03-13 00:21:15.206844 | orchestrator | 2026-03-13 00:21:15.206862 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-03-13 00:21:15.278521 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-03-13 00:21:15.278613 | orchestrator | 2026-03-13 00:21:15.278638 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-13 00:21:15.278657 | orchestrator | 2026-03-13 00:21:15.278670 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-03-13 00:21:15.311049 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:21:15.311142 | orchestrator | 2026-03-13 00:21:15.311156 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-03-13 00:21:15.376172 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-03-13 00:21:15.376264 | orchestrator | 2026-03-13 00:21:15.376278 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-03-13 00:21:16.085355 | orchestrator | changed: [testbed-manager] 2026-03-13 00:21:16.085467 | 
orchestrator | 2026-03-13 00:21:16.085484 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-03-13 00:21:19.022851 | orchestrator | ok: [testbed-manager] 2026-03-13 00:21:19.022974 | orchestrator | 2026-03-13 00:21:19.022998 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-03-13 00:21:19.089263 | orchestrator | ok: [testbed-manager] => { 2026-03-13 00:21:19.089389 | orchestrator | "version_check_result.stdout_lines": [ 2026-03-13 00:21:19.089399 | orchestrator | "=== OSISM Container Version Check ===", 2026-03-13 00:21:19.089406 | orchestrator | "Checking running containers against expected versions...", 2026-03-13 00:21:19.089414 | orchestrator | "", 2026-03-13 00:21:19.089422 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-03-13 00:21:19.089429 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-03-13 00:21:19.089435 | orchestrator | " Enabled: true", 2026-03-13 00:21:19.089442 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-03-13 00:21:19.089448 | orchestrator | " Status: ✅ MATCH", 2026-03-13 00:21:19.089454 | orchestrator | "", 2026-03-13 00:21:19.089461 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-03-13 00:21:19.089467 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-03-13 00:21:19.089474 | orchestrator | " Enabled: true", 2026-03-13 00:21:19.089480 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-03-13 00:21:19.089486 | orchestrator | " Status: ✅ MATCH", 2026-03-13 00:21:19.089492 | orchestrator | "", 2026-03-13 00:21:19.089498 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-03-13 00:21:19.089504 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-03-13 
00:21:19.089512 | orchestrator | " Enabled: true", 2026-03-13 00:21:19.089523 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-03-13 00:21:19.089534 | orchestrator | " Status: ✅ MATCH", 2026-03-13 00:21:19.089543 | orchestrator | "", 2026-03-13 00:21:19.089553 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-03-13 00:21:19.089565 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-03-13 00:21:19.089577 | orchestrator | " Enabled: true", 2026-03-13 00:21:19.089589 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-03-13 00:21:19.089596 | orchestrator | " Status: ✅ MATCH", 2026-03-13 00:21:19.089602 | orchestrator | "", 2026-03-13 00:21:19.089608 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-03-13 00:21:19.089614 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-03-13 00:21:19.089636 | orchestrator | " Enabled: true", 2026-03-13 00:21:19.089642 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-03-13 00:21:19.089648 | orchestrator | " Status: ✅ MATCH", 2026-03-13 00:21:19.089654 | orchestrator | "", 2026-03-13 00:21:19.089661 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-03-13 00:21:19.089667 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-13 00:21:19.089673 | orchestrator | " Enabled: true", 2026-03-13 00:21:19.089680 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-13 00:21:19.089686 | orchestrator | " Status: ✅ MATCH", 2026-03-13 00:21:19.089692 | orchestrator | "", 2026-03-13 00:21:19.089698 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-03-13 00:21:19.089705 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-13 00:21:19.089711 | orchestrator | " Enabled: true", 2026-03-13 00:21:19.089717 | orchestrator | " Running: 
registry.osism.tech/osism/ara-server:1.7.3", 2026-03-13 00:21:19.089723 | orchestrator | " Status: ✅ MATCH", 2026-03-13 00:21:19.089729 | orchestrator | "", 2026-03-13 00:21:19.089735 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-03-13 00:21:19.089742 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-13 00:21:19.089748 | orchestrator | " Enabled: true", 2026-03-13 00:21:19.089754 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-13 00:21:19.089760 | orchestrator | " Status: ✅ MATCH", 2026-03-13 00:21:19.089766 | orchestrator | "", 2026-03-13 00:21:19.089778 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-03-13 00:21:19.089784 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-03-13 00:21:19.089793 | orchestrator | " Enabled: true", 2026-03-13 00:21:19.089800 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-03-13 00:21:19.089807 | orchestrator | " Status: ✅ MATCH", 2026-03-13 00:21:19.089815 | orchestrator | "", 2026-03-13 00:21:19.089822 | orchestrator | "Checking service: redis (Redis Cache)", 2026-03-13 00:21:19.089829 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-13 00:21:19.089836 | orchestrator | " Enabled: true", 2026-03-13 00:21:19.089843 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-13 00:21:19.089851 | orchestrator | " Status: ✅ MATCH", 2026-03-13 00:21:19.089858 | orchestrator | "", 2026-03-13 00:21:19.089865 | orchestrator | "Checking service: api (OSISM API Service)", 2026-03-13 00:21:19.089872 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-13 00:21:19.089879 | orchestrator | " Enabled: true", 2026-03-13 00:21:19.089886 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-13 00:21:19.089894 | orchestrator | " 
Status: ✅ MATCH", 2026-03-13 00:21:19.089901 | orchestrator | "", 2026-03-13 00:21:19.089908 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-03-13 00:21:19.089915 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-13 00:21:19.089922 | orchestrator | " Enabled: true", 2026-03-13 00:21:19.089929 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-13 00:21:19.089936 | orchestrator | " Status: ✅ MATCH", 2026-03-13 00:21:19.089943 | orchestrator | "", 2026-03-13 00:21:19.089950 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-03-13 00:21:19.089958 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-13 00:21:19.089965 | orchestrator | " Enabled: true", 2026-03-13 00:21:19.089972 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-13 00:21:19.089979 | orchestrator | " Status: ✅ MATCH", 2026-03-13 00:21:19.089987 | orchestrator | "", 2026-03-13 00:21:19.089994 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-03-13 00:21:19.090001 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-13 00:21:19.090009 | orchestrator | " Enabled: true", 2026-03-13 00:21:19.090055 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-13 00:21:19.090062 | orchestrator | " Status: ✅ MATCH", 2026-03-13 00:21:19.090073 | orchestrator | "", 2026-03-13 00:21:19.090079 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-03-13 00:21:19.090100 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-13 00:21:19.090107 | orchestrator | " Enabled: true", 2026-03-13 00:21:19.090113 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-13 00:21:19.090119 | orchestrator | " Status: ✅ MATCH", 2026-03-13 00:21:19.090125 | orchestrator | "", 2026-03-13 00:21:19.090132 | orchestrator | "=== Summary ===", 2026-03-13 
00:21:19.090138 | orchestrator | "Errors (version mismatches): 0", 2026-03-13 00:21:19.090144 | orchestrator | "Warnings (expected containers not running): 0", 2026-03-13 00:21:19.090150 | orchestrator | "", 2026-03-13 00:21:19.090157 | orchestrator | "✅ All running containers match expected versions!" 2026-03-13 00:21:19.090163 | orchestrator | ] 2026-03-13 00:21:19.090169 | orchestrator | } 2026-03-13 00:21:19.090175 | orchestrator | 2026-03-13 00:21:19.090182 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-03-13 00:21:19.146036 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:21:19.146102 | orchestrator | 2026-03-13 00:21:19.146111 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 00:21:19.146119 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-03-13 00:21:19.146126 | orchestrator | 2026-03-13 00:21:19.207857 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-13 00:21:19.207943 | orchestrator | + deactivate 2026-03-13 00:21:19.207958 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-13 00:21:19.208009 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-13 00:21:19.208023 | orchestrator | + export PATH 2026-03-13 00:21:19.208035 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-13 00:21:19.208046 | orchestrator | + '[' -n '' ']' 2026-03-13 00:21:19.208057 | orchestrator | + hash -r 2026-03-13 00:21:19.208068 | orchestrator | + '[' -n '' ']' 2026-03-13 00:21:19.208078 | orchestrator | + unset VIRTUAL_ENV 2026-03-13 00:21:19.208089 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-13 00:21:19.208202 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-13 00:21:19.208218 | orchestrator | + unset -f deactivate 2026-03-13 00:21:19.208230 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-03-13 00:21:19.213464 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-13 00:21:19.213510 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-13 00:21:19.213522 | orchestrator | + local max_attempts=60 2026-03-13 00:21:19.213533 | orchestrator | + local name=ceph-ansible 2026-03-13 00:21:19.213545 | orchestrator | + local attempt_num=1 2026-03-13 00:21:19.214697 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-13 00:21:19.241843 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-13 00:21:19.241956 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-13 00:21:19.241981 | orchestrator | + local max_attempts=60 2026-03-13 00:21:19.242001 | orchestrator | + local name=kolla-ansible 2026-03-13 00:21:19.242097 | orchestrator | + local attempt_num=1 2026-03-13 00:21:19.242252 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-13 00:21:19.272443 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-13 00:21:19.272530 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-13 00:21:19.272543 | orchestrator | + local max_attempts=60 2026-03-13 00:21:19.272555 | orchestrator | + local name=osism-ansible 2026-03-13 00:21:19.272566 | orchestrator | + local attempt_num=1 2026-03-13 00:21:19.272906 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-13 00:21:19.298786 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-13 00:21:19.298881 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-13 00:21:19.298903 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-13 00:21:19.949383 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-03-13 00:21:20.125278 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-03-13 00:21:20.125441 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2026-03-13 00:21:20.125457 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2026-03-13 00:21:20.125469 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-03-13 00:21:20.125482 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2026-03-13 00:21:20.125492 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2026-03-13 00:21:20.125503 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2026-03-13 00:21:20.125514 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 56 seconds (healthy) 2026-03-13 00:21:20.125536 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2026-03-13 00:21:20.125547 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2026-03-13 00:21:20.125558 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute 
ago Up About a minute (healthy) 2026-03-13 00:21:20.125568 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2026-03-13 00:21:20.125579 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2026-03-13 00:21:20.125590 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-03-13 00:21:20.125600 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2026-03-13 00:21:20.125611 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2026-03-13 00:21:20.128588 | orchestrator | ++ semver latest 7.0.0 2026-03-13 00:21:20.163042 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-13 00:21:20.163124 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-13 00:21:20.163141 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-03-13 00:21:20.166275 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-03-13 00:21:31.979052 | orchestrator | 2026-03-13 00:21:31 | INFO  | Prepare task for execution of resolvconf. 2026-03-13 00:21:32.162384 | orchestrator | 2026-03-13 00:21:32 | INFO  | Task 367ef398-3ddf-4c88-807b-d5411fe20aaf (resolvconf) was prepared for execution. 2026-03-13 00:21:32.162466 | orchestrator | 2026-03-13 00:21:32 | INFO  | It takes a moment until task 367ef398-3ddf-4c88-807b-d5411fe20aaf (resolvconf) has been started and output is visible here. 
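The `wait_for_container_healthy` helper traced earlier (via `set -x`) can be sketched as a standalone bash function. Only the `docker inspect` health probe and the max-attempts/name parameters appear in the trace; the retry sleep and error message below are assumptions about how the real script in the testbed repository fills in the loop.

```shell
#!/usr/bin/env bash
# Sketch of wait_for_container_healthy as seen in the trace above.
# Polls the container's health status until Docker reports "healthy"
# or the attempt budget runs out (sleep interval is an assumption).
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "Container ${name} did not become healthy in time" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 5
    done
}
```

In the job above each call returns immediately because `ceph-ansible`, `kolla-ansible`, and `osism-ansible` are already `(healthy)` on the first probe.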
2026-03-13 00:21:44.502507 | orchestrator | 2026-03-13 00:21:44.502621 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-03-13 00:21:44.502638 | orchestrator | 2026-03-13 00:21:44.502650 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-13 00:21:44.502662 | orchestrator | Friday 13 March 2026 00:21:36 +0000 (0:00:00.125) 0:00:00.125 ********** 2026-03-13 00:21:44.502673 | orchestrator | ok: [testbed-manager] 2026-03-13 00:21:44.502686 | orchestrator | 2026-03-13 00:21:44.502697 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-03-13 00:21:44.502709 | orchestrator | Friday 13 March 2026 00:21:39 +0000 (0:00:03.367) 0:00:03.492 ********** 2026-03-13 00:21:44.502720 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:21:44.502731 | orchestrator | 2026-03-13 00:21:44.502742 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-13 00:21:44.502753 | orchestrator | Friday 13 March 2026 00:21:39 +0000 (0:00:00.056) 0:00:03.549 ********** 2026-03-13 00:21:44.502764 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-03-13 00:21:44.502776 | orchestrator | 2026-03-13 00:21:44.502787 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-13 00:21:44.502798 | orchestrator | Friday 13 March 2026 00:21:39 +0000 (0:00:00.077) 0:00:03.626 ********** 2026-03-13 00:21:44.502819 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-03-13 00:21:44.502830 | orchestrator | 2026-03-13 00:21:44.502841 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-03-13 00:21:44.502852 | orchestrator | Friday 13 March 2026 00:21:39 +0000 (0:00:00.064) 0:00:03.691 ********** 2026-03-13 00:21:44.502862 | orchestrator | ok: [testbed-manager] 2026-03-13 00:21:44.502873 | orchestrator | 2026-03-13 00:21:44.502884 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-13 00:21:44.502895 | orchestrator | Friday 13 March 2026 00:21:40 +0000 (0:00:00.814) 0:00:04.505 ********** 2026-03-13 00:21:44.502906 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:21:44.502917 | orchestrator | 2026-03-13 00:21:44.502927 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-13 00:21:44.502938 | orchestrator | Friday 13 March 2026 00:21:40 +0000 (0:00:00.051) 0:00:04.556 ********** 2026-03-13 00:21:44.502949 | orchestrator | ok: [testbed-manager] 2026-03-13 00:21:44.502959 | orchestrator | 2026-03-13 00:21:44.502970 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-13 00:21:44.502981 | orchestrator | Friday 13 March 2026 00:21:40 +0000 (0:00:00.448) 0:00:05.004 ********** 2026-03-13 00:21:44.502991 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:21:44.503002 | orchestrator | 2026-03-13 00:21:44.503013 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-13 00:21:44.503025 | orchestrator | Friday 13 March 2026 00:21:41 +0000 (0:00:00.078) 0:00:05.082 ********** 2026-03-13 00:21:44.503036 | orchestrator | changed: [testbed-manager] 2026-03-13 00:21:44.503047 | orchestrator | 2026-03-13 00:21:44.503058 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-13 00:21:44.503069 | orchestrator | Friday 13 March 2026 00:21:41 +0000 (0:00:00.482) 0:00:05.565 ********** 2026-03-13 00:21:44.503079 | orchestrator | changed: 
[testbed-manager] 2026-03-13 00:21:44.503090 | orchestrator | 2026-03-13 00:21:44.503101 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-13 00:21:44.503111 | orchestrator | Friday 13 March 2026 00:21:42 +0000 (0:00:00.945) 0:00:06.510 ********** 2026-03-13 00:21:44.503122 | orchestrator | ok: [testbed-manager] 2026-03-13 00:21:44.503133 | orchestrator | 2026-03-13 00:21:44.503167 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-13 00:21:44.503179 | orchestrator | Friday 13 March 2026 00:21:43 +0000 (0:00:00.830) 0:00:07.340 ********** 2026-03-13 00:21:44.503189 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-03-13 00:21:44.503200 | orchestrator | 2026-03-13 00:21:44.503211 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-13 00:21:44.503222 | orchestrator | Friday 13 March 2026 00:21:43 +0000 (0:00:00.081) 0:00:07.422 ********** 2026-03-13 00:21:44.503232 | orchestrator | changed: [testbed-manager] 2026-03-13 00:21:44.503243 | orchestrator | 2026-03-13 00:21:44.503253 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 00:21:44.503265 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-13 00:21:44.503276 | orchestrator | 2026-03-13 00:21:44.503312 | orchestrator | 2026-03-13 00:21:44.503323 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 00:21:44.503334 | orchestrator | Friday 13 March 2026 00:21:44 +0000 (0:00:01.005) 0:00:08.427 ********** 2026-03-13 00:21:44.503344 | orchestrator | =============================================================================== 2026-03-13 00:21:44.503355 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.37s 2026-03-13 00:21:44.503365 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.01s 2026-03-13 00:21:44.503376 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 0.95s 2026-03-13 00:21:44.503386 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.83s 2026-03-13 00:21:44.503397 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.81s 2026-03-13 00:21:44.503408 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.48s 2026-03-13 00:21:44.503435 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.45s 2026-03-13 00:21:44.503447 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2026-03-13 00:21:44.503458 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-03-13 00:21:44.503468 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-03-13 00:21:44.503479 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.06s 2026-03-13 00:21:44.503490 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2026-03-13 00:21:44.503500 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s 2026-03-13 00:21:44.687640 | orchestrator | + osism apply sshconfig 2026-03-13 00:21:56.450367 | orchestrator | 2026-03-13 00:21:56 | INFO  | Prepare task for execution of sshconfig. 2026-03-13 00:21:56.520455 | orchestrator | 2026-03-13 00:21:56 | INFO  | Task f8978327-498d-43b4-9f29-b2e55ffb3011 (sshconfig) was prepared for execution. 
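The resolvconf play above links `/run/systemd/resolve/stub-resolv.conf` to `/etc/resolv.conf` and restarts `systemd-resolved`. A minimal post-check for that state could look like the following; the helper name is hypothetical and not part of the role.

```shell
#!/usr/bin/env bash
# Hypothetical check that /etc/resolv.conf (or a given path) is the
# symlink to the systemd-resolved stub file, matching the "Link
# /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf" task above.
resolvconf_is_stub() {
    local conf="${1:-/etc/resolv.conf}"
    [[ "$(readlink "$conf")" == *"systemd/resolve/stub-resolv.conf" ]]
}
```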
2026-03-13 00:21:56.520572 | orchestrator | 2026-03-13 00:21:56 | INFO  | It takes a moment until task f8978327-498d-43b4-9f29-b2e55ffb3011 (sshconfig) has been started and output is visible here. 2026-03-13 00:22:07.361315 | orchestrator | 2026-03-13 00:22:07.362261 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-03-13 00:22:07.362395 | orchestrator | 2026-03-13 00:22:07.362422 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-03-13 00:22:07.362444 | orchestrator | Friday 13 March 2026 00:22:00 +0000 (0:00:00.116) 0:00:00.116 ********** 2026-03-13 00:22:07.362464 | orchestrator | ok: [testbed-manager] 2026-03-13 00:22:07.362477 | orchestrator | 2026-03-13 00:22:07.362488 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-03-13 00:22:07.362499 | orchestrator | Friday 13 March 2026 00:22:00 +0000 (0:00:00.466) 0:00:00.583 ********** 2026-03-13 00:22:07.362544 | orchestrator | changed: [testbed-manager] 2026-03-13 00:22:07.362557 | orchestrator | 2026-03-13 00:22:07.362568 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-03-13 00:22:07.362580 | orchestrator | Friday 13 March 2026 00:22:01 +0000 (0:00:00.416) 0:00:01.000 ********** 2026-03-13 00:22:07.362591 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-03-13 00:22:07.362603 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-03-13 00:22:07.362614 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-03-13 00:22:07.362625 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-03-13 00:22:07.362635 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-03-13 00:22:07.362646 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2026-03-13 00:22:07.362657 | orchestrator | changed: 
[testbed-manager] => (item=testbed-manager) 2026-03-13 00:22:07.362669 | orchestrator | 2026-03-13 00:22:07.362687 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-03-13 00:22:07.362705 | orchestrator | Friday 13 March 2026 00:22:06 +0000 (0:00:05.274) 0:00:06.274 ********** 2026-03-13 00:22:07.362716 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:22:07.362727 | orchestrator | 2026-03-13 00:22:07.362737 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-03-13 00:22:07.362748 | orchestrator | Friday 13 March 2026 00:22:06 +0000 (0:00:00.077) 0:00:06.352 ********** 2026-03-13 00:22:07.362759 | orchestrator | changed: [testbed-manager] 2026-03-13 00:22:07.362769 | orchestrator | 2026-03-13 00:22:07.362781 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 00:22:07.362793 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-13 00:22:07.362805 | orchestrator | 2026-03-13 00:22:07.362816 | orchestrator | 2026-03-13 00:22:07.362827 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 00:22:07.362843 | orchestrator | Friday 13 March 2026 00:22:07 +0000 (0:00:00.567) 0:00:06.920 ********** 2026-03-13 00:22:07.362862 | orchestrator | =============================================================================== 2026-03-13 00:22:07.362873 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.27s 2026-03-13 00:22:07.362884 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.57s 2026-03-13 00:22:07.362895 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.47s 2026-03-13 00:22:07.362906 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.42s 2026-03-13 00:22:07.362917 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2026-03-13 00:22:07.655625 | orchestrator | + osism apply known-hosts 2026-03-13 00:22:19.683690 | orchestrator | 2026-03-13 00:22:19 | INFO  | Prepare task for execution of known-hosts. 2026-03-13 00:22:19.751068 | orchestrator | 2026-03-13 00:22:19 | INFO  | Task aaf0e49d-490d-4a37-b131-9edaa966ff81 (known-hosts) was prepared for execution. 2026-03-13 00:22:19.751162 | orchestrator | 2026-03-13 00:22:19 | INFO  | It takes a moment until task aaf0e49d-490d-4a37-b131-9edaa966ff81 (known-hosts) has been started and output is visible here. 2026-03-13 00:22:34.429854 | orchestrator | 2026-03-13 00:22:34.429973 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-03-13 00:22:34.430001 | orchestrator | 2026-03-13 00:22:34.430102 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-03-13 00:22:34.430127 | orchestrator | Friday 13 March 2026 00:22:23 +0000 (0:00:00.152) 0:00:00.152 ********** 2026-03-13 00:22:34.430148 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-13 00:22:34.430172 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-13 00:22:34.430193 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-13 00:22:34.430248 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-13 00:22:34.430299 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-13 00:22:34.430321 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-13 00:22:34.430342 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-13 00:22:34.430364 | orchestrator | 2026-03-13 00:22:34.430385 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-03-13 
00:22:34.430409 | orchestrator | Friday 13 March 2026 00:22:29 +0000 (0:00:05.572) 0:00:05.724 ********** 2026-03-13 00:22:34.430447 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-13 00:22:34.430472 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-13 00:22:34.430494 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-13 00:22:34.430516 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-13 00:22:34.430536 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-13 00:22:34.430557 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-13 00:22:34.430578 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-13 00:22:34.430600 | orchestrator | 2026-03-13 00:22:34.430621 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-13 00:22:34.430644 | orchestrator | Friday 13 March 2026 00:22:29 +0000 (0:00:00.152) 0:00:05.877 ********** 2026-03-13 00:22:34.430669 | 
orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINEHvuAZPxmqZ9qaQKaBtMyCTKys+UAHrwSqBD/1tVr4) 2026-03-13 00:22:34.430696 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDzBEngU2eyveT0OSb9MwWFHKJLU4TWMdFyrsvYzPje8rHVxFeSh/sRwJqukq1fXC/rka5Stfc6eTJViiN4GqrTTq8MWSE1soA+8vHzjC9T/bT5BNfhFnUW+10BA2UNOuuiEuGXdQhiYRKtLo3YIGurlcKTr7Wz5OBVHv1u7PZ6ukCY720TPimza8DC+JTOyYDH4EiJUWcoRSaz0WTORhzoxFHVh042QXfSnz2Wjhc9AOq50whynDWW1pu05ucFuGUCuiPLHWp4/nAqYspd0RRJBAQPvaLsrBQPjAdtRtozCcoZY1mIO0rg9msZQ8fdikk3/qrThQHHyMqNM9EA+TrfzucinENMtYjCU2RArkKIaViN9Q79pdmq+2T2lNii2R2vRh23t1VnyLCPnVtrykwEpns++mjRbXoJryeRT/X4u9lY/Q1ajO3w4nFVbOf9+m63QSHmKRj/x6c7mcT41+8SMnunKJfBtXZ5wJo8tNw2LoKoLLNMMPv5PLcFRHLvy70=) 2026-03-13 00:22:34.430723 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOpp4iYIYESTOm4AITZGsNwi6H2W/2Tc5DCSMx0UJ6PjM9SgDhm/9+utmu1uLSwNqBEl8b16EBPBSkcEdgjF6vI=) 2026-03-13 00:22:34.430744 | orchestrator | 2026-03-13 00:22:34.430766 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-13 00:22:34.430787 | orchestrator | Friday 13 March 2026 00:22:30 +0000 (0:00:01.055) 0:00:06.932 ********** 2026-03-13 00:22:34.430846 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfgLHnH5FfEUJfOesAwBx/Atx5dq+BRZWwRMl8cBEzCDnI1St7oI9sAwXG3Tp1tv697SbEbDeoH0UdTXDwv+DbJQDqHDCaG5q/czfO2yJs+gGXcXOM2R9icbsEtnYHIsuE8C+JMfd1vyvc07m0eE4a7L5aDWPCzcPdxbgVdUqkUca1nvBNZXKSM8zX6bjVt+16O577EXZAvgVss+pkpb5gvrXAvz3Q4UERxGwaBds92jmF9uSdWbgtV8rjEqXirX6ogN9Ao0cdEU2AoN8GT7uxg8Hd5vLGTUoQuMcRDu70UJEbJm1jFfxafDlVQqmRo6M8VGRIQvGBacXAiI3obkfFeEmaMXzLGDbODWbtwAO49UG6XZAKJzXmqPIm33nc6BaKzaepiHSa/rJ5Dzn08quJUMnDdFto2d3JtaesVB3nljiNLuS2KEc//kMfyPUKjhOL6bO3xYTUHFvWfiMhQ3lxhzVaAaHjd2jgVPON3dkEt2tSldIb6hWFLnDZCAWPv20=) 2026-03-13 
00:22:34.430885 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCls70OVy03sHjIVXRsASK5e75YuQPV1jVAWCRXVhTxYtVkEfGm0Nkp1y1uQSYGUaf/kSZCn7Rr3TGMfi92iwOc=) 2026-03-13 00:22:34.430905 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA2kQSHaPtptkYJmgjTVMmwnAJFbZCtPs1eBtrlBEABP) 2026-03-13 00:22:34.430925 | orchestrator | 2026-03-13 00:22:34.430946 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-13 00:22:34.430966 | orchestrator | Friday 13 March 2026 00:22:31 +0000 (0:00:00.931) 0:00:07.863 ********** 2026-03-13 00:22:34.430988 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPpZjVbOpb3n8L7Vmgji98v81BCoqJJYsD7gLVnxPHsE) 2026-03-13 00:22:34.431011 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDTcHsSY1LIItSuVpitQjC4ZbNDXzLbTh7xHOvYXGFdM+4l/krM3s09KFNcwD5gCiLDZcLa00ZxqFqRQ5KPSoAPPISGi6/dpNhGOSjoqcouKLRj//xBBRKHgwVGPHdj9D1CEJ/+C9Wez+JkBo3dBQM4kEkx9FKp/LLWW2BuHyKwXe65HZLINUwAL7GnHke6MFp0q6vT4IIhuIH9+j9sXe2BtUYTdubPFhDej9rgK6oS8F74go9Q0hSZIljCAwpV/9GvpPYHUp7ciV/+kdt9/wyiZyG0WCQki+G2tiW31pv3UpsUNI+oi05AKKk3uorbm1sgAjH/ydiK4AXMyivvluA0ooubP9hBo4S66cwooRGSdTwKlqK4sdCDUz1KKIAyG2u/VVU9txD0qTYIvvbY7/KJZz1xhpZyU/UmHa5qPUbF9jbXNUqRtLoDY+0eUp9knSTqR9I4Wuuk88F++dPCp7uo2GNMGLGQbFWcM1rBC8M9RHZWPvDminib0lmSrPmCS6U=) 2026-03-13 00:22:34.431123 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE2ZBbGKo/yzbqjBFlBNfERoze2KMHqSTeHmADWZEsfG+2eIBh1W+BCNLYUlVjrJEj5u2Qkhd5dDA32q/nN7jLE=) 2026-03-13 00:22:34.431148 | orchestrator | 2026-03-13 00:22:34.431169 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-13 00:22:34.431190 | 
orchestrator | Friday 13 March 2026 00:22:32 +0000 (0:00:00.926) 0:00:08.790 ********** 2026-03-13 00:22:34.431212 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIzd76tPZpGoBlM71wJb9/9XvcXfFYM42vFB4+/X9P/U) 2026-03-13 00:22:34.431234 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDD7I8IAg84aaOhA1XgiCFmLtUikwHsrtvcXM5oi6QE5s+vM4FtGrnezgKMXN5lhdoS/18EMiMkZvbHEB4PG5H1DJNpLT8pL1vCXCVNBMOD/D4tVOMTNg4/Bk05ssmoZ7xH0RwolkTgnrTAEXicf0B1ZrK4xURpIpjDADGm0TAEoGBcHkwzEbpvmnD4ip5Yg7hauJCU3xyC5Kn1qypqKFSfgLT8v7kzN/SAtCoT16oR7ca69LEJQf0lP0YlMj0bGM7cw5dyqG1H3LjH6R621mF5NhSHtEKwqaK0tK+lq04eMPbz7dsonHWMm6lqqasjAiQUl2eNNPiPCj4I3dDRdnF1IiV9ihUcWvjMrEJvZ7QTAMXVwTH1ywYgWH5NobOqJ795ZhJHkuYa9C0E7DWmhVAaA1+rps9yOkrrMA2ja00rT5tfj1WS8qR+nbj9CUEUzA9FrMjlBkqKWSm9K6nwFL4JZaPaf1IgWDTGOaKvjoWQARj4+uv5Hxl9zeG6AtQrwbU=) 2026-03-13 00:22:34.431255 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIWTYIMQbShS7+i0JI3HHKqPlZ7NblzuXsuB8KOUDcWM3fLioEip3MyEHwCUtOeRK+HSSNkmBL9KMrNv7uo36qo=) 2026-03-13 00:22:34.431305 | orchestrator | 2026-03-13 00:22:34.431325 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-13 00:22:34.431346 | orchestrator | Friday 13 March 2026 00:22:33 +0000 (0:00:00.927) 0:00:09.718 ********** 2026-03-13 00:22:34.431364 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIb0u0YG1dKoKq2Kv82lHmbehfTNhFFJ/VR68wjV890nvsVxnFJG8trrpbNG77zuGS0e//hqIH3Ic78kjKJUVJE=) 2026-03-13 00:22:34.431382 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMe9v3uIDDh/EoimsEioWBYs3Ryri7fY5cl1/AHWnEm4) 2026-03-13 00:22:34.431415 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDmBA4h10++Wz+UjdkXiPzUu2viNZ1Ms/Km0nOiK1mhO2K9T63OVVCBGLBTrKHPi5HECEV4T2eUohwfYys5ifhDa1OKW3fQL96Q+obTQ8XbLBO5eejZH+sHhLyWXczaMqquzcGMq0FTW1sx4mIpIrHmq5wT4+FihK7cySrGkqoluRU/rO7Y69/hSTBM6FQTSxpkUMQOtvUQrE83d2D0v/8uFIo/sG7kGQwIlWeA7ghts73iWfxHbJYLqxlKWKOKkcSz8b4kxZqkWnj1X7F2vDmD4EylNxFd6OR1Dz7NK4QyHP4P6PW/YpRveLwbgKCs1sd0uYNzlJIYLbmrTmLJR/so0T7LeMMu5zFfgvSLH2jJR2djYRC0IK7oAdFV/Ac1g5Yvy5r0J+Fja8prLZL0MB9MlRm7nYevaVJkvJr1gf6MgNsefVX63UC56S0np3IpqPFf4se9vmU999hgcnwINX4oOD9uOUCncvgTMOwAObycYkEp6WjMy6RHTZEJoE9JUG8=) 2026-03-13 00:22:34.431434 | orchestrator | 2026-03-13 00:22:34.431453 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-13 00:22:34.431472 | orchestrator | Friday 13 March 2026 00:22:34 +0000 (0:00:00.968) 0:00:10.686 ********** 2026-03-13 00:22:34.431511 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDShfjIRh8Kr1thRB6fpJ1lwYqjEAbmFEAem9er+sRB9fnlpcG0NrLOOSCqytqFlaFZhrBMn37+T4Hh3RrB49fBGg18xRfcFeVISo/Ik5o7+qAAc08IdzDLeeYsnHx8ds0J9IRn689jwEmqLu79ufzO8591mMhvY1giOAA0r6LwkS/8XZjCeIdacttn5pId5lHSLfh1TPMsqA9cK3ulFOkJmBOAylJ2BCu269kraY1hd5Z3eMmr2YQvbj6Bqn1S7trCHxHJXwhPUOg+hC5iLzfJZaD8liZuusSU3DQhkb16esR4peNsh6rAnDlExPUlV22rxgxRsNK1p5rtVlkgUU1FlhjLPUBYBT95ySfrWxsYerXyFZj3nq14SNPzbRSmXDurBp9Ct2AvOWaUpCoRggiQdnMjEyEPROtTGsUkxRpW7XsSA2n3mgUBfTrCRuJpTAEuY4lyWgwZLe2kWPxwYrRk56aDSK5HK6Wvwbb6bgeM8gHdKRDYyfcwexTX9Z2ZtB8=) 2026-03-13 00:22:45.263826 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKwZ564vniHDrVAcMnRX5Cy5ZI6PKq6fRp+RvJwCbGRm95lPOU2UEappOGTCpbp9TD2KNTetp1oZ7mYq0FkO/W0=) 2026-03-13 00:22:45.263919 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEIZC1lTEHpYFSgWuui7Dt02nNQuGCvhovYempKSYYHH) 2026-03-13 00:22:45.263932 | orchestrator | 2026-03-13 00:22:45.263941 | 
orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-13 00:22:45.263951 | orchestrator | Friday 13 March 2026 00:22:35 +0000 (0:00:00.908) 0:00:11.595 ********** 2026-03-13 00:22:45.263961 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/dS/vDqs1FBgER45yJ6b2eLPEkbyfAhBqikQLahzK6dsRp0ugoy/qJhhy0hujmX5W5+yIwo6xtM1EBqg2EsLY2FJaNfqiepCjZkycsUlVvQch4vklfdsPe3+fV6B81KJ1md49iIDmJi/mkd+ZZF/bqmC3JW9mMT8+VyK/7ercRL08//MvbEH60DeB9WQDQ11shFZn0JCj2JZnh+0ifZYwk2nvuad6DwH/Mhx0unXZCaWJwGy86yB6I/I2GJFQww6/1shRZsytQ41gdNGDDDPDaQ5T5udqBKBy7uMz++YOoAGVv1IhrlgipND7SXI+JJfDVP6wVq+Kb0c9YC3jYKgE8CbEltNReEdy6M4kVFh97tTu73M5cbqFLLnvkhnO+S1ch4MYrqlbHRYcdXDnjWkWJDLDmRy1hTiazFWNA8DQjIYvKzHdAVOUU0INxadcLgqmvxIL1lF+3L9+alymARxn+7tJKbYeWLNiLR2+blVFzXKG+hJ/fdaPcq1Y1JmKF/c=) 2026-03-13 00:22:45.263972 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICbpPnyUEzO19Lrn6Pr+yMdRnj6COGdlQYFHnukhAJ9C) 2026-03-13 00:22:45.263993 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHMvp9WJbDTWcD0oqd3SMq7BEH7h/lIfTfyb0zNj4LzBvAtAZwrElsAHqdKT/7QM+QOPa0bL0QYTnbUQf0teSS4=) 2026-03-13 00:22:45.264001 | orchestrator | 2026-03-13 00:22:45.264010 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-03-13 00:22:45.264019 | orchestrator | Friday 13 March 2026 00:22:35 +0000 (0:00:00.927) 0:00:12.523 ********** 2026-03-13 00:22:45.264028 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-13 00:22:45.264037 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-13 00:22:45.264044 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-13 00:22:45.264052 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-13 00:22:45.264060 | orchestrator | 
ok: [testbed-manager] => (item=testbed-node-0)
2026-03-13 00:22:45.264084 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-03-13 00:22:45.264092 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-03-13 00:22:45.264119 | orchestrator |
2026-03-13 00:22:45.264128 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2026-03-13 00:22:45.264137 | orchestrator | Friday 13 March 2026 00:22:40 +0000 (0:00:05.018) 0:00:17.541 **********
2026-03-13 00:22:45.264146 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-03-13 00:22:45.264156 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-03-13 00:22:45.264164 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-03-13 00:22:45.264172 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-03-13 00:22:45.264180 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-03-13 00:22:45.264187 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-03-13 00:22:45.264195 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-13 00:22:45.264203 | orchestrator | 2026-03-13 00:22:45.264211 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-13 00:22:45.264219 | orchestrator | Friday 13 March 2026 00:22:41 +0000 (0:00:00.163) 0:00:17.704 ********** 2026-03-13 00:22:45.264246 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDzBEngU2eyveT0OSb9MwWFHKJLU4TWMdFyrsvYzPje8rHVxFeSh/sRwJqukq1fXC/rka5Stfc6eTJViiN4GqrTTq8MWSE1soA+8vHzjC9T/bT5BNfhFnUW+10BA2UNOuuiEuGXdQhiYRKtLo3YIGurlcKTr7Wz5OBVHv1u7PZ6ukCY720TPimza8DC+JTOyYDH4EiJUWcoRSaz0WTORhzoxFHVh042QXfSnz2Wjhc9AOq50whynDWW1pu05ucFuGUCuiPLHWp4/nAqYspd0RRJBAQPvaLsrBQPjAdtRtozCcoZY1mIO0rg9msZQ8fdikk3/qrThQHHyMqNM9EA+TrfzucinENMtYjCU2RArkKIaViN9Q79pdmq+2T2lNii2R2vRh23t1VnyLCPnVtrykwEpns++mjRbXoJryeRT/X4u9lY/Q1ajO3w4nFVbOf9+m63QSHmKRj/x6c7mcT41+8SMnunKJfBtXZ5wJo8tNw2LoKoLLNMMPv5PLcFRHLvy70=) 2026-03-13 00:22:45.264293 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOpp4iYIYESTOm4AITZGsNwi6H2W/2Tc5DCSMx0UJ6PjM9SgDhm/9+utmu1uLSwNqBEl8b16EBPBSkcEdgjF6vI=) 2026-03-13 00:22:45.264303 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINEHvuAZPxmqZ9qaQKaBtMyCTKys+UAHrwSqBD/1tVr4) 2026-03-13 00:22:45.264311 | orchestrator | 2026-03-13 00:22:45.264319 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-13 00:22:45.264326 | orchestrator | Friday 13 March 2026 00:22:42 +0000 (0:00:01.027) 0:00:18.732 ********** 2026-03-13 00:22:45.264335 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCfgLHnH5FfEUJfOesAwBx/Atx5dq+BRZWwRMl8cBEzCDnI1St7oI9sAwXG3Tp1tv697SbEbDeoH0UdTXDwv+DbJQDqHDCaG5q/czfO2yJs+gGXcXOM2R9icbsEtnYHIsuE8C+JMfd1vyvc07m0eE4a7L5aDWPCzcPdxbgVdUqkUca1nvBNZXKSM8zX6bjVt+16O577EXZAvgVss+pkpb5gvrXAvz3Q4UERxGwaBds92jmF9uSdWbgtV8rjEqXirX6ogN9Ao0cdEU2AoN8GT7uxg8Hd5vLGTUoQuMcRDu70UJEbJm1jFfxafDlVQqmRo6M8VGRIQvGBacXAiI3obkfFeEmaMXzLGDbODWbtwAO49UG6XZAKJzXmqPIm33nc6BaKzaepiHSa/rJ5Dzn08quJUMnDdFto2d3JtaesVB3nljiNLuS2KEc//kMfyPUKjhOL6bO3xYTUHFvWfiMhQ3lxhzVaAaHjd2jgVPON3dkEt2tSldIb6hWFLnDZCAWPv20=) 2026-03-13 00:22:45.264349 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCls70OVy03sHjIVXRsASK5e75YuQPV1jVAWCRXVhTxYtVkEfGm0Nkp1y1uQSYGUaf/kSZCn7Rr3TGMfi92iwOc=) 2026-03-13 00:22:45.264357 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA2kQSHaPtptkYJmgjTVMmwnAJFbZCtPs1eBtrlBEABP) 2026-03-13 00:22:45.264365 | orchestrator | 2026-03-13 00:22:45.264373 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-13 00:22:45.264381 | orchestrator | Friday 13 March 2026 00:22:43 +0000 (0:00:01.059) 0:00:19.791 ********** 2026-03-13 00:22:45.264389 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPpZjVbOpb3n8L7Vmgji98v81BCoqJJYsD7gLVnxPHsE) 2026-03-13 00:22:45.264398 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDTcHsSY1LIItSuVpitQjC4ZbNDXzLbTh7xHOvYXGFdM+4l/krM3s09KFNcwD5gCiLDZcLa00ZxqFqRQ5KPSoAPPISGi6/dpNhGOSjoqcouKLRj//xBBRKHgwVGPHdj9D1CEJ/+C9Wez+JkBo3dBQM4kEkx9FKp/LLWW2BuHyKwXe65HZLINUwAL7GnHke6MFp0q6vT4IIhuIH9+j9sXe2BtUYTdubPFhDej9rgK6oS8F74go9Q0hSZIljCAwpV/9GvpPYHUp7ciV/+kdt9/wyiZyG0WCQki+G2tiW31pv3UpsUNI+oi05AKKk3uorbm1sgAjH/ydiK4AXMyivvluA0ooubP9hBo4S66cwooRGSdTwKlqK4sdCDUz1KKIAyG2u/VVU9txD0qTYIvvbY7/KJZz1xhpZyU/UmHa5qPUbF9jbXNUqRtLoDY+0eUp9knSTqR9I4Wuuk88F++dPCp7uo2GNMGLGQbFWcM1rBC8M9RHZWPvDminib0lmSrPmCS6U=) 2026-03-13 00:22:45.264406 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE2ZBbGKo/yzbqjBFlBNfERoze2KMHqSTeHmADWZEsfG+2eIBh1W+BCNLYUlVjrJEj5u2Qkhd5dDA32q/nN7jLE=) 2026-03-13 00:22:45.264414 | orchestrator | 2026-03-13 00:22:45.264422 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-13 00:22:45.264430 | orchestrator | Friday 13 March 2026 00:22:44 +0000 (0:00:01.030) 0:00:20.822 ********** 2026-03-13 00:22:45.264438 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIzd76tPZpGoBlM71wJb9/9XvcXfFYM42vFB4+/X9P/U) 2026-03-13 00:22:45.264450 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDD7I8IAg84aaOhA1XgiCFmLtUikwHsrtvcXM5oi6QE5s+vM4FtGrnezgKMXN5lhdoS/18EMiMkZvbHEB4PG5H1DJNpLT8pL1vCXCVNBMOD/D4tVOMTNg4/Bk05ssmoZ7xH0RwolkTgnrTAEXicf0B1ZrK4xURpIpjDADGm0TAEoGBcHkwzEbpvmnD4ip5Yg7hauJCU3xyC5Kn1qypqKFSfgLT8v7kzN/SAtCoT16oR7ca69LEJQf0lP0YlMj0bGM7cw5dyqG1H3LjH6R621mF5NhSHtEKwqaK0tK+lq04eMPbz7dsonHWMm6lqqasjAiQUl2eNNPiPCj4I3dDRdnF1IiV9ihUcWvjMrEJvZ7QTAMXVwTH1ywYgWH5NobOqJ795ZhJHkuYa9C0E7DWmhVAaA1+rps9yOkrrMA2ja00rT5tfj1WS8qR+nbj9CUEUzA9FrMjlBkqKWSm9K6nwFL4JZaPaf1IgWDTGOaKvjoWQARj4+uv5Hxl9zeG6AtQrwbU=) 2026-03-13 00:22:45.264467 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIWTYIMQbShS7+i0JI3HHKqPlZ7NblzuXsuB8KOUDcWM3fLioEip3MyEHwCUtOeRK+HSSNkmBL9KMrNv7uo36qo=) 2026-03-13 00:22:49.520784 | orchestrator | 2026-03-13 00:22:49.520889 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-13 00:22:49.520910 | orchestrator | Friday 13 March 2026 00:22:45 +0000 (0:00:01.046) 0:00:21.869 ********** 2026-03-13 00:22:49.520944 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDmBA4h10++Wz+UjdkXiPzUu2viNZ1Ms/Km0nOiK1mhO2K9T63OVVCBGLBTrKHPi5HECEV4T2eUohwfYys5ifhDa1OKW3fQL96Q+obTQ8XbLBO5eejZH+sHhLyWXczaMqquzcGMq0FTW1sx4mIpIrHmq5wT4+FihK7cySrGkqoluRU/rO7Y69/hSTBM6FQTSxpkUMQOtvUQrE83d2D0v/8uFIo/sG7kGQwIlWeA7ghts73iWfxHbJYLqxlKWKOKkcSz8b4kxZqkWnj1X7F2vDmD4EylNxFd6OR1Dz7NK4QyHP4P6PW/YpRveLwbgKCs1sd0uYNzlJIYLbmrTmLJR/so0T7LeMMu5zFfgvSLH2jJR2djYRC0IK7oAdFV/Ac1g5Yvy5r0J+Fja8prLZL0MB9MlRm7nYevaVJkvJr1gf6MgNsefVX63UC56S0np3IpqPFf4se9vmU999hgcnwINX4oOD9uOUCncvgTMOwAObycYkEp6WjMy6RHTZEJoE9JUG8=) 2026-03-13 00:22:49.520962 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIb0u0YG1dKoKq2Kv82lHmbehfTNhFFJ/VR68wjV890nvsVxnFJG8trrpbNG77zuGS0e//hqIH3Ic78kjKJUVJE=) 2026-03-13 00:22:49.521002 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMe9v3uIDDh/EoimsEioWBYs3Ryri7fY5cl1/AHWnEm4) 2026-03-13 00:22:49.521016 | orchestrator | 2026-03-13 00:22:49.521028 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-13 00:22:49.521040 | orchestrator | Friday 13 March 2026 00:22:46 +0000 (0:00:01.003) 0:00:22.873 ********** 2026-03-13 00:22:49.521052 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIEIZC1lTEHpYFSgWuui7Dt02nNQuGCvhovYempKSYYHH) 2026-03-13 00:22:49.521065 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDShfjIRh8Kr1thRB6fpJ1lwYqjEAbmFEAem9er+sRB9fnlpcG0NrLOOSCqytqFlaFZhrBMn37+T4Hh3RrB49fBGg18xRfcFeVISo/Ik5o7+qAAc08IdzDLeeYsnHx8ds0J9IRn689jwEmqLu79ufzO8591mMhvY1giOAA0r6LwkS/8XZjCeIdacttn5pId5lHSLfh1TPMsqA9cK3ulFOkJmBOAylJ2BCu269kraY1hd5Z3eMmr2YQvbj6Bqn1S7trCHxHJXwhPUOg+hC5iLzfJZaD8liZuusSU3DQhkb16esR4peNsh6rAnDlExPUlV22rxgxRsNK1p5rtVlkgUU1FlhjLPUBYBT95ySfrWxsYerXyFZj3nq14SNPzbRSmXDurBp9Ct2AvOWaUpCoRggiQdnMjEyEPROtTGsUkxRpW7XsSA2n3mgUBfTrCRuJpTAEuY4lyWgwZLe2kWPxwYrRk56aDSK5HK6Wvwbb6bgeM8gHdKRDYyfcwexTX9Z2ZtB8=) 2026-03-13 00:22:49.521079 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKwZ564vniHDrVAcMnRX5Cy5ZI6PKq6fRp+RvJwCbGRm95lPOU2UEappOGTCpbp9TD2KNTetp1oZ7mYq0FkO/W0=) 2026-03-13 00:22:49.521092 | orchestrator | 2026-03-13 00:22:49.521106 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-13 00:22:49.521114 | orchestrator | Friday 13 March 2026 00:22:47 +0000 (0:00:01.084) 0:00:23.957 ********** 2026-03-13 00:22:49.521121 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHMvp9WJbDTWcD0oqd3SMq7BEH7h/lIfTfyb0zNj4LzBvAtAZwrElsAHqdKT/7QM+QOPa0bL0QYTnbUQf0teSS4=) 2026-03-13 00:22:49.521132 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC/dS/vDqs1FBgER45yJ6b2eLPEkbyfAhBqikQLahzK6dsRp0ugoy/qJhhy0hujmX5W5+yIwo6xtM1EBqg2EsLY2FJaNfqiepCjZkycsUlVvQch4vklfdsPe3+fV6B81KJ1md49iIDmJi/mkd+ZZF/bqmC3JW9mMT8+VyK/7ercRL08//MvbEH60DeB9WQDQ11shFZn0JCj2JZnh+0ifZYwk2nvuad6DwH/Mhx0unXZCaWJwGy86yB6I/I2GJFQww6/1shRZsytQ41gdNGDDDPDaQ5T5udqBKBy7uMz++YOoAGVv1IhrlgipND7SXI+JJfDVP6wVq+Kb0c9YC3jYKgE8CbEltNReEdy6M4kVFh97tTu73M5cbqFLLnvkhnO+S1ch4MYrqlbHRYcdXDnjWkWJDLDmRy1hTiazFWNA8DQjIYvKzHdAVOUU0INxadcLgqmvxIL1lF+3L9+alymARxn+7tJKbYeWLNiLR2+blVFzXKG+hJ/fdaPcq1Y1JmKF/c=) 2026-03-13 00:22:49.521144 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICbpPnyUEzO19Lrn6Pr+yMdRnj6COGdlQYFHnukhAJ9C) 2026-03-13 00:22:49.521157 | orchestrator | 2026-03-13 00:22:49.521168 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-13 00:22:49.521180 | orchestrator | Friday 13 March 2026 00:22:48 +0000 (0:00:01.019) 0:00:24.977 ********** 2026-03-13 00:22:49.521194 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-13 00:22:49.521207 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-13 00:22:49.521219 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-13 00:22:49.521230 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-13 00:22:49.521238 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-13 00:22:49.521250 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-13 00:22:49.521319 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-13 00:22:49.521334 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:22:49.521347 | orchestrator | 2026-03-13 00:22:49.521380 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-03-13 00:22:49.521394 | orchestrator | Friday 13 March 
2026 00:22:48 +0000 (0:00:00.169) 0:00:25.147 **********
2026-03-13 00:22:49.521417 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:22:49.521431 | orchestrator |
2026-03-13 00:22:49.521445 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2026-03-13 00:22:49.521458 | orchestrator | Friday 13 March 2026 00:22:48 +0000 (0:00:00.051) 0:00:25.199 **********
2026-03-13 00:22:49.521471 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:22:49.521484 | orchestrator |
2026-03-13 00:22:49.521497 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2026-03-13 00:22:49.521510 | orchestrator | Friday 13 March 2026 00:22:48 +0000 (0:00:00.046) 0:00:25.245 **********
2026-03-13 00:22:49.521524 | orchestrator | changed: [testbed-manager]
2026-03-13 00:22:49.521537 | orchestrator |
2026-03-13 00:22:49.521550 | orchestrator | PLAY RECAP *********************************************************************
2026-03-13 00:22:49.521564 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-13 00:22:49.521578 | orchestrator |
2026-03-13 00:22:49.521590 | orchestrator |
2026-03-13 00:22:49.521604 | orchestrator | TASKS RECAP ********************************************************************
2026-03-13 00:22:49.521618 | orchestrator | Friday 13 March 2026 00:22:49 +0000 (0:00:00.683) 0:00:25.928 **********
2026-03-13 00:22:49.521631 | orchestrator | ===============================================================================
2026-03-13 00:22:49.521644 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.57s
2026-03-13 00:22:49.521656 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.02s
2026-03-13 00:22:49.521670 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s
2026-03-13 00:22:49.521683 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s
2026-03-13 00:22:49.521695 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s
2026-03-13 00:22:49.521708 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2026-03-13 00:22:49.521722 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2026-03-13 00:22:49.521735 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2026-03-13 00:22:49.521747 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2026-03-13 00:22:49.521760 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s
2026-03-13 00:22:49.521774 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s
2026-03-13 00:22:49.521787 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.93s
2026-03-13 00:22:49.521800 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.93s
2026-03-13 00:22:49.521824 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.93s
2026-03-13 00:22:49.521838 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.93s
2026-03-13 00:22:49.521850 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.91s
2026-03-13 00:22:49.521863 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.68s
2026-03-13 00:22:49.521877 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s
2026-03-13 00:22:49.521889 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s
2026-03-13 00:22:49.521902 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.15s
2026-03-13 00:22:49.804199 | orchestrator | + osism apply squid
2026-03-13 00:23:01.842425 | orchestrator | 2026-03-13 00:23:01 | INFO  | Prepare task for execution of squid.
2026-03-13 00:23:01.911859 | orchestrator | 2026-03-13 00:23:01 | INFO  | Task aa30bc5d-1287-4439-ae5a-e62297eda138 (squid) was prepared for execution.
2026-03-13 00:23:01.911954 | orchestrator | 2026-03-13 00:23:01 | INFO  | It takes a moment until task aa30bc5d-1287-4439-ae5a-e62297eda138 (squid) has been started and output is visible here.
2026-03-13 00:25:06.265776 | orchestrator |
2026-03-13 00:25:06.265888 | orchestrator | PLAY [Apply role squid] ********************************************************
2026-03-13 00:25:06.265905 | orchestrator |
2026-03-13 00:25:06.265918 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2026-03-13 00:25:06.265930 | orchestrator | Friday 13 March 2026 00:23:05 +0000 (0:00:00.154) 0:00:00.154 **********
2026-03-13 00:25:06.265941 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2026-03-13 00:25:06.265953 | orchestrator |
2026-03-13 00:25:06.265964 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2026-03-13 00:25:06.265975 | orchestrator | Friday 13 March 2026 00:23:06 +0000 (0:00:00.079) 0:00:00.233 **********
2026-03-13 00:25:06.265986 | orchestrator | ok: [testbed-manager]
2026-03-13 00:25:06.265998 | orchestrator |
2026-03-13 00:25:06.266009 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2026-03-13 00:25:06.266085 | orchestrator | Friday 13 March 2026 00:23:07 +0000 (0:00:01.367) 0:00:01.601 **********
2026-03-13 00:25:06.266097 | orchestrator | changed:
[testbed-manager] => (item=/opt/squid/configuration)
2026-03-13 00:25:06.266109 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2026-03-13 00:25:06.266120 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2026-03-13 00:25:06.266132 | orchestrator |
2026-03-13 00:25:06.266143 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2026-03-13 00:25:06.266154 | orchestrator | Friday 13 March 2026 00:23:08 +0000 (0:00:01.125) 0:00:02.726 **********
2026-03-13 00:25:06.266165 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2026-03-13 00:25:06.266176 | orchestrator |
2026-03-13 00:25:06.266231 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2026-03-13 00:25:06.266243 | orchestrator | Friday 13 March 2026 00:23:09 +0000 (0:00:01.031) 0:00:03.757 **********
2026-03-13 00:25:06.266255 | orchestrator | ok: [testbed-manager]
2026-03-13 00:25:06.266265 | orchestrator |
2026-03-13 00:25:06.266276 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2026-03-13 00:25:06.266305 | orchestrator | Friday 13 March 2026 00:23:09 +0000 (0:00:00.343) 0:00:04.101 **********
2026-03-13 00:25:06.266317 | orchestrator | changed: [testbed-manager]
2026-03-13 00:25:06.266330 | orchestrator |
2026-03-13 00:25:06.266344 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2026-03-13 00:25:06.266357 | orchestrator | Friday 13 March 2026 00:23:10 +0000 (0:00:00.883) 0:00:04.984 **********
2026-03-13 00:25:06.266370 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2026-03-13 00:25:06.266384 | orchestrator | ok: [testbed-manager]
2026-03-13 00:25:06.266396 | orchestrator |
2026-03-13 00:25:06.266408 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2026-03-13 00:25:06.266422 | orchestrator | Friday 13 March 2026 00:23:49 +0000 (0:00:38.594) 0:00:43.578 **********
2026-03-13 00:25:06.266436 | orchestrator | changed: [testbed-manager]
2026-03-13 00:25:06.266448 | orchestrator |
2026-03-13 00:25:06.266461 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2026-03-13 00:25:06.266474 | orchestrator | Friday 13 March 2026 00:24:05 +0000 (0:00:15.866) 0:00:59.445 **********
2026-03-13 00:25:06.266487 | orchestrator | Pausing for 60 seconds
2026-03-13 00:25:06.266499 | orchestrator | changed: [testbed-manager]
2026-03-13 00:25:06.266513 | orchestrator |
2026-03-13 00:25:06.266525 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2026-03-13 00:25:06.266538 | orchestrator | Friday 13 March 2026 00:25:05 +0000 (0:01:00.089) 0:01:59.535 **********
2026-03-13 00:25:06.266550 | orchestrator | ok: [testbed-manager]
2026-03-13 00:25:06.266563 | orchestrator |
2026-03-13 00:25:06.266576 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2026-03-13 00:25:06.266612 | orchestrator | Friday 13 March 2026 00:25:05 +0000 (0:00:00.075) 0:01:59.610 **********
2026-03-13 00:25:06.266625 | orchestrator | changed: [testbed-manager]
2026-03-13 00:25:06.266639 | orchestrator |
2026-03-13 00:25:06.266651 | orchestrator | PLAY RECAP *********************************************************************
2026-03-13 00:25:06.266664 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 00:25:06.266678 | orchestrator |
2026-03-13 00:25:06.266689 | orchestrator |
2026-03-13 00:25:06.266700 | orchestrator | TASKS RECAP ********************************************************************
2026-03-13 00:25:06.266711 | orchestrator | Friday 13 March 2026 00:25:06 +0000 (0:00:00.594) 0:02:00.205 **********
2026-03-13 00:25:06.266722 | orchestrator | ===============================================================================
2026-03-13 00:25:06.266733 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s
2026-03-13 00:25:06.266744 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 38.59s
2026-03-13 00:25:06.266756 | orchestrator | osism.services.squid : Restart squid service --------------------------- 15.87s
2026-03-13 00:25:06.266766 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.37s
2026-03-13 00:25:06.266777 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.13s
2026-03-13 00:25:06.266788 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.03s
2026-03-13 00:25:06.266799 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.88s
2026-03-13 00:25:06.266810 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.60s
2026-03-13 00:25:06.266850 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.34s
2026-03-13 00:25:06.266861 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s
2026-03-13 00:25:06.266872 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s
2026-03-13 00:25:06.547515 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-03-13 00:25:06.547619 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla
2026-03-13 00:25:06.554317 | orchestrator | + set -e
2026-03-13 00:25:06.554393 | orchestrator | + NAMESPACE=kolla
2026-03-13
00:25:06.554407 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-13 00:25:06.561334 | orchestrator | ++ semver latest 9.0.0 2026-03-13 00:25:06.623640 | orchestrator | + [[ -1 -lt 0 ]] 2026-03-13 00:25:06.623737 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-13 00:25:06.624404 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-03-13 00:25:18.646891 | orchestrator | 2026-03-13 00:25:18 | INFO  | Prepare task for execution of operator. 2026-03-13 00:25:18.735689 | orchestrator | 2026-03-13 00:25:18 | INFO  | Task 9a9609f0-0b24-4ef0-9d11-d46c3b34e863 (operator) was prepared for execution. 2026-03-13 00:25:18.735909 | orchestrator | 2026-03-13 00:25:18 | INFO  | It takes a moment until task 9a9609f0-0b24-4ef0-9d11-d46c3b34e863 (operator) has been started and output is visible here. 2026-03-13 00:25:35.522658 | orchestrator | 2026-03-13 00:25:35.522817 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-03-13 00:25:35.522838 | orchestrator | 2026-03-13 00:25:35.522850 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-13 00:25:35.522862 | orchestrator | Friday 13 March 2026 00:25:23 +0000 (0:00:00.157) 0:00:00.157 ********** 2026-03-13 00:25:35.522874 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:25:35.522887 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:25:35.522898 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:25:35.522908 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:25:35.522919 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:25:35.522930 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:25:35.522945 | orchestrator | 2026-03-13 00:25:35.522957 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-03-13 00:25:35.522989 | orchestrator | Friday 13 March 2026 00:25:26 
+0000 (0:00:03.498) 0:00:03.656 ********** 2026-03-13 00:25:35.523001 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:25:35.523011 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:25:35.523022 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:25:35.523033 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:25:35.523043 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:25:35.523054 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:25:35.523065 | orchestrator | 2026-03-13 00:25:35.523076 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-03-13 00:25:35.523089 | orchestrator | 2026-03-13 00:25:35.523101 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-13 00:25:35.523114 | orchestrator | Friday 13 March 2026 00:25:27 +0000 (0:00:00.786) 0:00:04.442 ********** 2026-03-13 00:25:35.523125 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:25:35.523137 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:25:35.523149 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:25:35.523161 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:25:35.523256 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:25:35.523311 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:25:35.523324 | orchestrator | 2026-03-13 00:25:35.523337 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-13 00:25:35.523350 | orchestrator | Friday 13 March 2026 00:25:27 +0000 (0:00:00.167) 0:00:04.610 ********** 2026-03-13 00:25:35.523361 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:25:35.523479 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:25:35.523493 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:25:35.523504 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:25:35.523533 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:25:35.523544 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:25:35.523555 | orchestrator | 
2026-03-13 00:25:35.523634 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-13 00:25:35.523791 | orchestrator | Friday 13 March 2026 00:25:28 +0000 (0:00:00.157) 0:00:04.767 ********** 2026-03-13 00:25:35.523811 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:25:35.523830 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:25:35.523847 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:25:35.523865 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:25:35.523883 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:25:35.524037 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:25:35.524050 | orchestrator | 2026-03-13 00:25:35.524061 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-13 00:25:35.524121 | orchestrator | Friday 13 March 2026 00:25:28 +0000 (0:00:00.626) 0:00:05.393 ********** 2026-03-13 00:25:35.524133 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:25:35.524143 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:25:35.524155 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:25:35.524165 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:25:35.524205 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:25:35.524217 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:25:35.524228 | orchestrator | 2026-03-13 00:25:35.524240 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-13 00:25:35.524251 | orchestrator | Friday 13 March 2026 00:25:29 +0000 (0:00:00.805) 0:00:06.199 ********** 2026-03-13 00:25:35.524262 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-03-13 00:25:35.524273 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-03-13 00:25:35.524336 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-03-13 00:25:35.524349 | orchestrator | changed: [testbed-node-4] => (item=adm) 
2026-03-13 00:25:35.524360 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-03-13 00:25:35.524371 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-03-13 00:25:35.524382 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-03-13 00:25:35.524393 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-03-13 00:25:35.524404 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-03-13 00:25:35.524428 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-03-13 00:25:35.524439 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-03-13 00:25:35.524450 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-03-13 00:25:35.524461 | orchestrator | 2026-03-13 00:25:35.524472 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-13 00:25:35.524483 | orchestrator | Friday 13 March 2026 00:25:30 +0000 (0:00:01.249) 0:00:07.449 ********** 2026-03-13 00:25:35.524494 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:25:35.524505 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:25:35.524515 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:25:35.524526 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:25:35.524537 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:25:35.524548 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:25:35.524558 | orchestrator | 2026-03-13 00:25:35.524569 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-13 00:25:35.524586 | orchestrator | Friday 13 March 2026 00:25:32 +0000 (0:00:01.274) 0:00:08.723 ********** 2026-03-13 00:25:35.524607 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-03-13 00:25:35.524624 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-03-13 00:25:35.524642 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 
2026-03-13 00:25:35.524658 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-03-13 00:25:35.524677 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-03-13 00:25:35.524721 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-03-13 00:25:35.524741 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-03-13 00:25:35.524760 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-03-13 00:25:35.524778 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-03-13 00:25:35.524796 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-03-13 00:25:35.524815 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-03-13 00:25:35.524833 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-03-13 00:25:35.524965 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-03-13 00:25:35.524980 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-03-13 00:25:35.524992 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-03-13 00:25:35.525003 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-03-13 00:25:35.525023 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-03-13 00:25:35.525034 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-03-13 00:25:35.525045 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-03-13 00:25:35.525056 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-03-13 00:25:35.525067 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-03-13 00:25:35.525078 | orchestrator |
2026-03-13 00:25:35.525089 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-03-13 00:25:35.525101 | orchestrator | Friday 13 March 2026 00:25:33 +0000 (0:00:01.318) 0:00:10.042 **********
2026-03-13 00:25:35.525111 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:25:35.525122 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:25:35.525133 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:25:35.525144 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:25:35.525155 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:25:35.525166 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:25:35.525230 | orchestrator |
2026-03-13 00:25:35.525243 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-03-13 00:25:35.525265 | orchestrator | Friday 13 March 2026 00:25:33 +0000 (0:00:00.171) 0:00:10.213 **********
2026-03-13 00:25:35.525406 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:25:35.525429 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:25:35.525445 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:25:35.525461 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:25:35.525478 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:25:35.525494 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:25:35.525510 | orchestrator |
2026-03-13 00:25:35.525525 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-03-13 00:25:35.525535 | orchestrator | Friday 13 March 2026 00:25:33 +0000 (0:00:00.187) 0:00:10.401 **********
2026-03-13 00:25:35.525545 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:25:35.525554 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:25:35.525563 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:25:35.525573 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:25:35.525582 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:25:35.525591 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:25:35.525601 | orchestrator |
2026-03-13 00:25:35.525610 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-03-13 00:25:35.525620 | orchestrator | Friday 13 March 2026 00:25:34 +0000 (0:00:00.622) 0:00:11.024 **********
2026-03-13 00:25:35.525629 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:25:35.525639 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:25:35.525648 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:25:35.525657 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:25:35.525667 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:25:35.525676 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:25:35.525686 | orchestrator |
2026-03-13 00:25:35.525695 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-03-13 00:25:35.525705 | orchestrator | Friday 13 March 2026 00:25:34 +0000 (0:00:00.721) 0:00:11.203 **********
2026-03-13 00:25:35.525715 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-13 00:25:35.525804 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:25:35.525817 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-13 00:25:35.525826 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-13 00:25:35.525836 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:25:35.525845 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:25:35.525855 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-13 00:25:35.525865 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:25:35.525921 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-13 00:25:35.525930 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:25:35.525940 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-13 00:25:35.525950 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:25:35.525959 | orchestrator |
2026-03-13 00:25:35.525969 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-03-13 00:25:35.525978 | orchestrator | Friday 13 March 2026 00:25:35 +0000 (0:00:00.721) 0:00:11.925 **********
2026-03-13 00:25:35.525988 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:25:35.525997 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:25:35.526007 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:25:35.526280 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:25:35.526350 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:25:35.526360 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:25:35.526370 | orchestrator |
2026-03-13 00:25:35.526380 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-03-13 00:25:35.526390 | orchestrator | Friday 13 March 2026 00:25:35 +0000 (0:00:00.173) 0:00:12.099 **********
2026-03-13 00:25:35.526400 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:25:35.526409 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:25:35.526419 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:25:35.526428 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:25:35.526463 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:25:36.946704 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:25:36.946859 | orchestrator |
2026-03-13 00:25:36.946881 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-03-13 00:25:36.946902 | orchestrator | Friday 13 March 2026 00:25:35 +0000 (0:00:00.154) 0:00:12.253 **********
2026-03-13 00:25:36.946920 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:25:36.946939 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:25:36.946958 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:25:36.946977 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:25:36.946995 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:25:36.947014 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:25:36.947026 | orchestrator |
2026-03-13 00:25:36.947036 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-03-13 00:25:36.947047 | orchestrator | Friday 13 March 2026 00:25:35 +0000 (0:00:00.165) 0:00:12.419 **********
2026-03-13 00:25:36.947058 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:25:36.947069 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:25:36.947080 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:25:36.947091 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:25:36.947102 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:25:36.947112 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:25:36.947123 | orchestrator |
2026-03-13 00:25:36.947134 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-03-13 00:25:36.947145 | orchestrator | Friday 13 March 2026 00:25:36 +0000 (0:00:00.700) 0:00:13.119 **********
2026-03-13 00:25:36.947156 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:25:36.947166 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:25:36.947216 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:25:36.947230 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:25:36.947243 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:25:36.947255 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:25:36.947267 | orchestrator |
2026-03-13 00:25:36.947279 | orchestrator | PLAY RECAP *********************************************************************
2026-03-13 00:25:36.947293 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-13 00:25:36.947337 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-13 00:25:36.947350 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-13 00:25:36.947363 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-13 00:25:36.947376 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-13 00:25:36.947388 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-13 00:25:36.947398 | orchestrator |
2026-03-13 00:25:36.947409 | orchestrator |
2026-03-13 00:25:36.947420 | orchestrator | TASKS RECAP ********************************************************************
2026-03-13 00:25:36.947431 | orchestrator | Friday 13 March 2026 00:25:36 +0000 (0:00:00.272) 0:00:13.392 **********
2026-03-13 00:25:36.947442 | orchestrator | ===============================================================================
2026-03-13 00:25:36.947452 | orchestrator | Gathering Facts --------------------------------------------------------- 3.50s
2026-03-13 00:25:36.947463 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.32s
2026-03-13 00:25:36.947475 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.27s
2026-03-13 00:25:36.947513 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.25s
2026-03-13 00:25:36.947524 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.81s
2026-03-13 00:25:36.947535 | orchestrator | Do not require tty for all users ---------------------------------------- 0.79s
2026-03-13 00:25:36.947545 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.72s
2026-03-13 00:25:36.947556 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.70s
2026-03-13 00:25:36.947566 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.63s
2026-03-13 00:25:36.947577 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.62s
2026-03-13 00:25:36.947588 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.27s
2026-03-13 00:25:36.947599 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.19s
2026-03-13 00:25:36.947610 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s
2026-03-13 00:25:36.947621 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.17s
2026-03-13 00:25:36.947632 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.17s
2026-03-13 00:25:36.947642 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s
2026-03-13 00:25:36.947653 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.17s
2026-03-13 00:25:36.947663 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s
2026-03-13 00:25:36.947674 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.15s
2026-03-13 00:25:37.251569 | orchestrator | + osism apply --environment custom facts
2026-03-13 00:25:39.405095 | orchestrator | 2026-03-13 00:25:39 | INFO  | Trying to run play facts in environment custom
2026-03-13 00:25:49.463980 | orchestrator | 2026-03-13 00:25:49 | INFO  | Prepare task for execution of facts.
2026-03-13 00:25:49.534764 | orchestrator | 2026-03-13 00:25:49 | INFO  | Task ae504ab0-6075-46b5-98e9-be4be9e8dbb9 (facts) was prepared for execution.
2026-03-13 00:25:49.534852 | orchestrator | 2026-03-13 00:25:49 | INFO  | It takes a moment until task ae504ab0-6075-46b5-98e9-be4be9e8dbb9 (facts) has been started and output is visible here.
2026-03-13 00:26:35.118452 | orchestrator |
2026-03-13 00:26:35.118547 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-03-13 00:26:35.118561 | orchestrator |
2026-03-13 00:26:35.118571 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-13 00:26:35.118594 | orchestrator | Friday 13 March 2026 00:25:53 +0000 (0:00:00.071) 0:00:00.071 **********
2026-03-13 00:26:35.118602 | orchestrator | ok: [testbed-manager]
2026-03-13 00:26:35.118612 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:26:35.118621 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:26:35.118629 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:26:35.118637 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:26:35.118658 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:26:35.118667 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:26:35.118682 | orchestrator |
2026-03-13 00:26:35.118691 | orchestrator | TASK [Copy fact file] **********************************************************
2026-03-13 00:26:35.118699 | orchestrator | Friday 13 March 2026 00:25:55 +0000 (0:00:01.531) 0:00:01.603 **********
2026-03-13 00:26:35.118707 | orchestrator | ok: [testbed-manager]
2026-03-13 00:26:35.118715 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:26:35.118723 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:26:35.118731 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:26:35.118739 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:26:35.118748 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:26:35.118756 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:26:35.118764 | orchestrator |
2026-03-13 00:26:35.118789 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-03-13 00:26:35.118798 | orchestrator |
2026-03-13 00:26:35.118806 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-13 00:26:35.118814 | orchestrator | Friday 13 March 2026 00:25:56 +0000 (0:00:01.236) 0:00:02.840 **********
2026-03-13 00:26:35.118822 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:26:35.118830 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:26:35.118838 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:26:35.118846 | orchestrator |
2026-03-13 00:26:35.118854 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-13 00:26:35.118863 | orchestrator | Friday 13 March 2026 00:25:56 +0000 (0:00:00.100) 0:00:02.940 **********
2026-03-13 00:26:35.118871 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:26:35.118879 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:26:35.118887 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:26:35.118895 | orchestrator |
2026-03-13 00:26:35.118903 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-13 00:26:35.118911 | orchestrator | Friday 13 March 2026 00:25:56 +0000 (0:00:00.196) 0:00:03.137 **********
2026-03-13 00:26:35.118919 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:26:35.118927 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:26:35.118935 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:26:35.118943 | orchestrator |
2026-03-13 00:26:35.118950 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-13 00:26:35.118959 | orchestrator | Friday 13 March 2026 00:25:56 +0000 (0:00:00.222) 0:00:03.360 **********
2026-03-13 00:26:35.118968 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-13 00:26:35.118977 | orchestrator |
2026-03-13 00:26:35.118985 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-13 00:26:35.118993 | orchestrator | Friday 13 March 2026 00:25:57 +0000 (0:00:00.139) 0:00:03.500 **********
2026-03-13 00:26:35.119001 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:26:35.119011 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:26:35.119020 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:26:35.119029 | orchestrator |
2026-03-13 00:26:35.119038 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-13 00:26:35.119048 | orchestrator | Friday 13 March 2026 00:25:57 +0000 (0:00:00.430) 0:00:03.931 **********
2026-03-13 00:26:35.119057 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:26:35.119066 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:26:35.119074 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:26:35.119083 | orchestrator |
2026-03-13 00:26:35.119093 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-13 00:26:35.119102 | orchestrator | Friday 13 March 2026 00:25:57 +0000 (0:00:00.123) 0:00:04.054 **********
2026-03-13 00:26:35.119111 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:26:35.119119 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:26:35.119128 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:26:35.119137 | orchestrator |
2026-03-13 00:26:35.119161 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-13 00:26:35.119170 | orchestrator | Friday 13 March 2026 00:25:58 +0000 (0:00:01.035) 0:00:05.089 **********
2026-03-13 00:26:35.119179 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:26:35.119188 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:26:35.119197 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:26:35.119206 | orchestrator |
2026-03-13 00:26:35.119216 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-13 00:26:35.119225 | orchestrator | Friday 13 March 2026 00:25:59 +0000 (0:00:00.441) 0:00:05.530 **********
2026-03-13 00:26:35.119234 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:26:35.119243 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:26:35.119253 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:26:35.119261 | orchestrator |
2026-03-13 00:26:35.119275 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-13 00:26:35.119283 | orchestrator | Friday 13 March 2026 00:26:00 +0000 (0:00:01.115) 0:00:06.646 **********
2026-03-13 00:26:35.119291 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:26:35.119299 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:26:35.119307 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:26:35.119315 | orchestrator |
2026-03-13 00:26:35.119323 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-03-13 00:26:35.119331 | orchestrator | Friday 13 March 2026 00:26:17 +0000 (0:00:17.459) 0:00:24.105 **********
2026-03-13 00:26:35.119339 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:26:35.119347 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:26:35.119355 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:26:35.119363 | orchestrator |
2026-03-13 00:26:35.119371 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-03-13 00:26:35.119394 | orchestrator | Friday 13 March 2026 00:26:17 +0000 (0:00:00.097) 0:00:24.203 **********
2026-03-13 00:26:35.119402 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:26:35.119410 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:26:35.119418 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:26:35.119426 | orchestrator |
2026-03-13 00:26:35.119434 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-13 00:26:35.119443 | orchestrator | Friday 13 March 2026 00:26:25 +0000 (0:00:08.137) 0:00:32.340 **********
2026-03-13 00:26:35.119451 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:26:35.119459 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:26:35.119467 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:26:35.119475 | orchestrator |
2026-03-13 00:26:35.119483 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-13 00:26:35.119491 | orchestrator | Friday 13 March 2026 00:26:26 +0000 (0:00:00.441) 0:00:32.782 **********
2026-03-13 00:26:35.119499 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-03-13 00:26:35.119508 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-03-13 00:26:35.119516 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-03-13 00:26:35.119524 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-03-13 00:26:35.119532 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-03-13 00:26:35.119540 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-03-13 00:26:35.119548 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-03-13 00:26:35.119556 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-03-13 00:26:35.119564 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-03-13 00:26:35.119572 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-03-13 00:26:35.119580 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-03-13 00:26:35.119588 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-03-13 00:26:35.119596 | orchestrator |
2026-03-13 00:26:35.119604 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-13 00:26:35.119612 | orchestrator | Friday 13 March 2026 00:26:29 +0000 (0:00:03.593) 0:00:36.375 **********
2026-03-13 00:26:35.119620 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:26:35.119628 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:26:35.119636 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:26:35.119644 | orchestrator |
2026-03-13 00:26:35.119652 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-13 00:26:35.119659 | orchestrator |
2026-03-13 00:26:35.119668 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-13 00:26:35.119676 | orchestrator | Friday 13 March 2026 00:26:31 +0000 (0:00:01.348) 0:00:37.723 **********
2026-03-13 00:26:35.119684 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:26:35.119697 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:26:35.119705 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:26:35.119713 | orchestrator | ok: [testbed-manager]
2026-03-13 00:26:35.119721 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:26:35.119761 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:26:35.119770 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:26:35.119778 | orchestrator |
2026-03-13 00:26:35.119786 | orchestrator | PLAY RECAP *********************************************************************
2026-03-13 00:26:35.119794 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 00:26:35.119803 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 00:26:35.119812 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 00:26:35.119820 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 00:26:35.119828 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-13 00:26:35.119836 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-13 00:26:35.119844 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-13 00:26:35.119852 | orchestrator |
2026-03-13 00:26:35.119860 | orchestrator |
2026-03-13 00:26:35.119868 | orchestrator | TASKS RECAP ********************************************************************
2026-03-13 00:26:35.119876 | orchestrator | Friday 13 March 2026 00:26:35 +0000 (0:00:03.800) 0:00:41.524 **********
2026-03-13 00:26:35.119884 | orchestrator | ===============================================================================
2026-03-13 00:26:35.119892 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.46s
2026-03-13 00:26:35.119900 | orchestrator | Install required packages (Debian) -------------------------------------- 8.14s
2026-03-13 00:26:35.119908 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.80s
2026-03-13 00:26:35.119916 | orchestrator | Copy fact files --------------------------------------------------------- 3.59s
2026-03-13 00:26:35.119924 | orchestrator | Create custom facts directory ------------------------------------------- 1.53s
2026-03-13 00:26:35.119932 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.35s
2026-03-13 00:26:35.119944 | orchestrator | Copy fact file ---------------------------------------------------------- 1.24s
2026-03-13 00:26:35.303890 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.12s
2026-03-13 00:26:35.303996 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.04s
2026-03-13 00:26:35.304008 | orchestrator | Create custom facts directory ------------------------------------------- 0.44s
2026-03-13 00:26:35.304016 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.44s
2026-03-13 00:26:35.304024 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s
2026-03-13 00:26:35.304032 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.22s
2026-03-13 00:26:35.304041 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.20s
2026-03-13 00:26:35.304048 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s
2026-03-13 00:26:35.304057 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s
2026-03-13 00:26:35.304065 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s
2026-03-13 00:26:35.304073 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2026-03-13 00:26:35.608388 | orchestrator | + osism apply bootstrap
2026-03-13 00:26:47.740736 | orchestrator | 2026-03-13 00:26:47 | INFO  | Prepare task for execution of bootstrap.
2026-03-13 00:26:47.812839 | orchestrator | 2026-03-13 00:26:47 | INFO  | Task fc2dbd1a-d2be-49fd-bee3-4dfe7d482ddc (bootstrap) was prepared for execution.
2026-03-13 00:26:47.812917 | orchestrator | 2026-03-13 00:26:47 | INFO  | It takes a moment until task fc2dbd1a-d2be-49fd-bee3-4dfe7d482ddc (bootstrap) has been started and output is visible here.
2026-03-13 00:27:03.408248 | orchestrator |
2026-03-13 00:27:03.408340 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-03-13 00:27:03.408352 | orchestrator |
2026-03-13 00:27:03.408360 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-03-13 00:27:03.408367 | orchestrator | Friday 13 March 2026 00:26:51 +0000 (0:00:00.104) 0:00:00.104 **********
2026-03-13 00:27:03.408375 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:27:03.408383 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:27:03.408390 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:27:03.408397 | orchestrator | ok: [testbed-manager]
2026-03-13 00:27:03.408404 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:27:03.408410 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:27:03.408417 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:27:03.408424 | orchestrator |
2026-03-13 00:27:03.408431 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-13 00:27:03.408438 | orchestrator |
2026-03-13 00:27:03.408445 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-13 00:27:03.408452 | orchestrator | Friday 13 March 2026 00:26:51 +0000 (0:00:00.188) 0:00:00.293 **********
2026-03-13 00:27:03.408458 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:27:03.408466 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:27:03.408473 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:27:03.408480 | orchestrator | ok: [testbed-manager]
2026-03-13 00:27:03.408486 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:27:03.408493 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:27:03.408500 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:27:03.408506 | orchestrator |
2026-03-13 00:27:03.408513 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-03-13 00:27:03.408520 | orchestrator |
2026-03-13 00:27:03.408527 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-13 00:27:03.408533 | orchestrator | Friday 13 March 2026 00:26:55 +0000 (0:00:03.719) 0:00:04.012 **********
2026-03-13 00:27:03.408541 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-13 00:27:03.408548 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-13 00:27:03.408555 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-13 00:27:03.408562 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-13 00:27:03.408568 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-13 00:27:03.408575 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-13 00:27:03.408582 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-13 00:27:03.408589 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-03-13 00:27:03.408595 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-13 00:27:03.408602 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-03-13 00:27:03.408609 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-13 00:27:03.408616 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-03-13 00:27:03.408623 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-13 00:27:03.408629 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-13 00:27:03.408636 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-13 00:27:03.408643 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-13 00:27:03.408667 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-03-13 00:27:03.408675 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-03-13 00:27:03.408681 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-13 00:27:03.408688 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-13 00:27:03.408695 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-13 00:27:03.408702 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:27:03.408709 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-03-13 00:27:03.408715 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-13 00:27:03.408722 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:27:03.408729 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-13 00:27:03.408736 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-03-13 00:27:03.408754 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-13 00:27:03.408762 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-13 00:27:03.408768 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-13 00:27:03.408775 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-13 00:27:03.408782 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-13 00:27:03.408790 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:27:03.408798 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-13 00:27:03.408805 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-03-13 00:27:03.408813 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-13 00:27:03.408821 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-13 00:27:03.408829 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-03-13 00:27:03.408836 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:27:03.408844 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-03-13 00:27:03.408852 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-13 00:27:03.408860 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-13 00:27:03.408868 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-03-13 00:27:03.408875 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-13 00:27:03.408883 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-13 00:27:03.408891 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-13 00:27:03.408912 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-03-13 00:27:03.408919 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-13 00:27:03.408926 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-13 00:27:03.408933 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:27:03.408939 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-13 00:27:03.408946 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-13 00:27:03.408953 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:27:03.408960 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-13 00:27:03.408966 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-13 00:27:03.408973 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:27:03.408980 | orchestrator |
2026-03-13 00:27:03.408986 | orchestrator |
PLAY [Apply bootstrap roles part 1] ******************************************** 2026-03-13 00:27:03.408993 | orchestrator | 2026-03-13 00:27:03.409000 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-03-13 00:27:03.409007 | orchestrator | Friday 13 March 2026 00:26:56 +0000 (0:00:00.382) 0:00:04.395 ********** 2026-03-13 00:27:03.409014 | orchestrator | ok: [testbed-manager] 2026-03-13 00:27:03.409020 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:27:03.409032 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:27:03.409039 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:27:03.409046 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:27:03.409052 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:27:03.409059 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:27:03.409065 | orchestrator | 2026-03-13 00:27:03.409072 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-03-13 00:27:03.409079 | orchestrator | Friday 13 March 2026 00:26:57 +0000 (0:00:01.170) 0:00:05.565 ********** 2026-03-13 00:27:03.409086 | orchestrator | ok: [testbed-manager] 2026-03-13 00:27:03.409092 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:27:03.409099 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:27:03.409106 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:27:03.409113 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:27:03.409119 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:27:03.409126 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:27:03.409156 | orchestrator | 2026-03-13 00:27:03.409164 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-03-13 00:27:03.409171 | orchestrator | Friday 13 March 2026 00:26:58 +0000 (0:00:01.388) 0:00:06.954 ********** 2026-03-13 00:27:03.409179 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:27:03.409188 | orchestrator | 2026-03-13 00:27:03.409194 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-03-13 00:27:03.409201 | orchestrator | Friday 13 March 2026 00:26:58 +0000 (0:00:00.240) 0:00:07.195 ********** 2026-03-13 00:27:03.409208 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:27:03.409215 | orchestrator | changed: [testbed-manager] 2026-03-13 00:27:03.409221 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:27:03.409228 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:27:03.409235 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:27:03.409241 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:27:03.409248 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:27:03.409255 | orchestrator | 2026-03-13 00:27:03.409262 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-03-13 00:27:03.409268 | orchestrator | Friday 13 March 2026 00:27:00 +0000 (0:00:01.986) 0:00:09.181 ********** 2026-03-13 00:27:03.409275 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:27:03.409283 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:27:03.409291 | orchestrator | 2026-03-13 00:27:03.409297 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-03-13 00:27:03.409304 | orchestrator | Friday 13 March 2026 00:27:01 +0000 (0:00:00.246) 0:00:09.428 ********** 2026-03-13 00:27:03.409311 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:27:03.409317 | 
orchestrator | changed: [testbed-node-1] 2026-03-13 00:27:03.409324 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:27:03.409331 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:27:03.409337 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:27:03.409351 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:27:03.409358 | orchestrator | 2026-03-13 00:27:03.409365 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2026-03-13 00:27:03.409372 | orchestrator | Friday 13 March 2026 00:27:02 +0000 (0:00:01.098) 0:00:10.526 ********** 2026-03-13 00:27:03.409379 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:27:03.409385 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:27:03.409392 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:27:03.409399 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:27:03.409405 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:27:03.409412 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:27:03.409423 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:27:03.409430 | orchestrator | 2026-03-13 00:27:03.409437 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-03-13 00:27:03.409444 | orchestrator | Friday 13 March 2026 00:27:02 +0000 (0:00:00.628) 0:00:11.155 ********** 2026-03-13 00:27:03.409451 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:27:03.409458 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:27:03.409464 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:27:03.409471 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:27:03.409478 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:27:03.409484 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:27:03.409491 | orchestrator | ok: [testbed-manager] 2026-03-13 00:27:03.409498 | orchestrator | 2026-03-13 00:27:03.409505 | orchestrator | TASK [osism.commons.resolvconf : 
Check minimum and maximum number of name servers] *** 2026-03-13 00:27:03.409513 | orchestrator | Friday 13 March 2026 00:27:03 +0000 (0:00:00.477) 0:00:11.632 ********** 2026-03-13 00:27:03.409520 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:27:03.409526 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:27:03.409538 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:27:14.672371 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:27:14.672461 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:27:14.672471 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:27:14.672479 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:27:14.672486 | orchestrator | 2026-03-13 00:27:14.672494 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-13 00:27:14.672504 | orchestrator | Friday 13 March 2026 00:27:03 +0000 (0:00:00.188) 0:00:11.820 ********** 2026-03-13 00:27:14.672513 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:27:14.672531 | orchestrator | 2026-03-13 00:27:14.672540 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-13 00:27:14.672549 | orchestrator | Friday 13 March 2026 00:27:03 +0000 (0:00:00.295) 0:00:12.116 ********** 2026-03-13 00:27:14.672556 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:27:14.672564 | orchestrator | 2026-03-13 00:27:14.672572 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-03-13 
00:27:14.672579 | orchestrator | Friday 13 March 2026 00:27:04 +0000 (0:00:00.380) 0:00:12.496 ********** 2026-03-13 00:27:14.672586 | orchestrator | ok: [testbed-manager] 2026-03-13 00:27:14.672595 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:27:14.672603 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:27:14.672611 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:27:14.672618 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:27:14.672626 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:27:14.672633 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:27:14.672640 | orchestrator | 2026-03-13 00:27:14.672648 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-13 00:27:14.672654 | orchestrator | Friday 13 March 2026 00:27:05 +0000 (0:00:01.351) 0:00:13.848 ********** 2026-03-13 00:27:14.672662 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:27:14.672670 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:27:14.672676 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:27:14.672684 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:27:14.672691 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:27:14.672698 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:27:14.672705 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:27:14.672713 | orchestrator | 2026-03-13 00:27:14.672720 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-13 00:27:14.672745 | orchestrator | Friday 13 March 2026 00:27:05 +0000 (0:00:00.230) 0:00:14.078 ********** 2026-03-13 00:27:14.672753 | orchestrator | ok: [testbed-manager] 2026-03-13 00:27:14.672760 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:27:14.672767 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:27:14.672775 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:27:14.672782 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:27:14.672789 | orchestrator 
| ok: [testbed-node-2] 2026-03-13 00:27:14.672796 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:27:14.672804 | orchestrator | 2026-03-13 00:27:14.672811 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-13 00:27:14.672818 | orchestrator | Friday 13 March 2026 00:27:06 +0000 (0:00:00.537) 0:00:14.616 ********** 2026-03-13 00:27:14.672826 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:27:14.672833 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:27:14.672840 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:27:14.672847 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:27:14.672854 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:27:14.672862 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:27:14.672869 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:27:14.672876 | orchestrator | 2026-03-13 00:27:14.672884 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-13 00:27:14.672893 | orchestrator | Friday 13 March 2026 00:27:06 +0000 (0:00:00.234) 0:00:14.850 ********** 2026-03-13 00:27:14.672901 | orchestrator | ok: [testbed-manager] 2026-03-13 00:27:14.672908 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:27:14.672921 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:27:14.672929 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:27:14.672937 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:27:14.672945 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:27:14.672953 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:27:14.672961 | orchestrator | 2026-03-13 00:27:14.672969 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-13 00:27:14.672977 | orchestrator | Friday 13 March 2026 00:27:07 +0000 (0:00:00.649) 0:00:15.499 ********** 2026-03-13 00:27:14.672984 | orchestrator | ok: 
[testbed-manager] 2026-03-13 00:27:14.672992 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:27:14.673001 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:27:14.673009 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:27:14.673016 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:27:14.673024 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:27:14.673031 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:27:14.673038 | orchestrator | 2026-03-13 00:27:14.673047 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-13 00:27:14.673055 | orchestrator | Friday 13 March 2026 00:27:08 +0000 (0:00:01.089) 0:00:16.588 ********** 2026-03-13 00:27:14.673063 | orchestrator | ok: [testbed-manager] 2026-03-13 00:27:14.673070 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:27:14.673079 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:27:14.673087 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:27:14.673094 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:27:14.673102 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:27:14.673110 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:27:14.673117 | orchestrator | 2026-03-13 00:27:14.673125 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-13 00:27:14.673156 | orchestrator | Friday 13 March 2026 00:27:09 +0000 (0:00:01.029) 0:00:17.618 ********** 2026-03-13 00:27:14.673179 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:27:14.673188 | orchestrator | 2026-03-13 00:27:14.673196 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-13 00:27:14.673204 | orchestrator | Friday 13 March 2026 
00:27:09 +0000 (0:00:00.249) 0:00:17.868 ********** 2026-03-13 00:27:14.673218 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:27:14.673226 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:27:14.673234 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:27:14.673242 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:27:14.673250 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:27:14.673258 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:27:14.673265 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:27:14.673273 | orchestrator | 2026-03-13 00:27:14.673280 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-13 00:27:14.673288 | orchestrator | Friday 13 March 2026 00:27:10 +0000 (0:00:01.200) 0:00:19.068 ********** 2026-03-13 00:27:14.673295 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:27:14.673302 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:27:14.673310 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:27:14.673317 | orchestrator | ok: [testbed-manager] 2026-03-13 00:27:14.673325 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:27:14.673332 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:27:14.673339 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:27:14.673346 | orchestrator | 2026-03-13 00:27:14.673354 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-13 00:27:14.673361 | orchestrator | Friday 13 March 2026 00:27:10 +0000 (0:00:00.167) 0:00:19.236 ********** 2026-03-13 00:27:14.673369 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:27:14.673376 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:27:14.673384 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:27:14.673391 | orchestrator | ok: [testbed-manager] 2026-03-13 00:27:14.673398 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:27:14.673406 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:27:14.673413 | 
orchestrator | ok: [testbed-node-2] 2026-03-13 00:27:14.673421 | orchestrator | 2026-03-13 00:27:14.673428 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-13 00:27:14.673436 | orchestrator | Friday 13 March 2026 00:27:11 +0000 (0:00:00.192) 0:00:19.429 ********** 2026-03-13 00:27:14.673443 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:27:14.673450 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:27:14.673457 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:27:14.673464 | orchestrator | ok: [testbed-manager] 2026-03-13 00:27:14.673472 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:27:14.673479 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:27:14.673486 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:27:14.673493 | orchestrator | 2026-03-13 00:27:14.673501 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-13 00:27:14.673508 | orchestrator | Friday 13 March 2026 00:27:11 +0000 (0:00:00.171) 0:00:19.600 ********** 2026-03-13 00:27:14.673516 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:27:14.673525 | orchestrator | 2026-03-13 00:27:14.673532 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-13 00:27:14.673539 | orchestrator | Friday 13 March 2026 00:27:11 +0000 (0:00:00.210) 0:00:19.811 ********** 2026-03-13 00:27:14.673546 | orchestrator | ok: [testbed-manager] 2026-03-13 00:27:14.673553 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:27:14.673561 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:27:14.673568 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:27:14.673575 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:27:14.673582 | orchestrator | ok: 
[testbed-node-2] 2026-03-13 00:27:14.673590 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:27:14.673597 | orchestrator | 2026-03-13 00:27:14.673605 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-13 00:27:14.673612 | orchestrator | Friday 13 March 2026 00:27:11 +0000 (0:00:00.489) 0:00:20.300 ********** 2026-03-13 00:27:14.673619 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:27:14.673627 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:27:14.673640 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:27:14.673648 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:27:14.673656 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:27:14.673664 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:27:14.673671 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:27:14.673678 | orchestrator | 2026-03-13 00:27:14.673686 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-13 00:27:14.673694 | orchestrator | Friday 13 March 2026 00:27:12 +0000 (0:00:00.182) 0:00:20.483 ********** 2026-03-13 00:27:14.673701 | orchestrator | ok: [testbed-manager] 2026-03-13 00:27:14.673709 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:27:14.673716 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:27:14.673724 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:27:14.673731 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:27:14.673739 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:27:14.673746 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:27:14.673754 | orchestrator | 2026-03-13 00:27:14.673761 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-13 00:27:14.673768 | orchestrator | Friday 13 March 2026 00:27:13 +0000 (0:00:00.991) 0:00:21.474 ********** 2026-03-13 00:27:14.673774 | orchestrator | ok: [testbed-manager] 2026-03-13 
00:27:14.673780 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:27:14.673787 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:27:14.673793 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:27:14.673799 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:27:14.673806 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:27:14.673813 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:27:14.673821 | orchestrator | 2026-03-13 00:27:14.673828 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-13 00:27:14.673835 | orchestrator | Friday 13 March 2026 00:27:13 +0000 (0:00:00.507) 0:00:21.982 ********** 2026-03-13 00:27:14.673843 | orchestrator | ok: [testbed-manager] 2026-03-13 00:27:14.673850 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:27:14.673857 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:27:14.673864 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:27:14.673878 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:27:54.469414 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:27:54.469521 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:27:54.469537 | orchestrator | 2026-03-13 00:27:54.469549 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-13 00:27:54.469561 | orchestrator | Friday 13 March 2026 00:27:14 +0000 (0:00:01.063) 0:00:23.045 ********** 2026-03-13 00:27:54.469572 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:27:54.469582 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:27:54.469592 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:27:54.469602 | orchestrator | changed: [testbed-manager] 2026-03-13 00:27:54.469612 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:27:54.469621 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:27:54.469631 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:27:54.469640 | orchestrator | 2026-03-13 00:27:54.469651 | orchestrator | TASK 
[osism.services.rsyslog : Gather variables for each operating system] ***** 2026-03-13 00:27:54.469661 | orchestrator | Friday 13 March 2026 00:27:30 +0000 (0:00:15.703) 0:00:38.749 ********** 2026-03-13 00:27:54.469671 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:27:54.469681 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:27:54.469691 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:27:54.469700 | orchestrator | ok: [testbed-manager] 2026-03-13 00:27:54.469710 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:27:54.469720 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:27:54.469729 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:27:54.469739 | orchestrator | 2026-03-13 00:27:54.469749 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-03-13 00:27:54.469758 | orchestrator | Friday 13 March 2026 00:27:30 +0000 (0:00:00.208) 0:00:38.957 ********** 2026-03-13 00:27:54.469768 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:27:54.469800 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:27:54.469811 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:27:54.469821 | orchestrator | ok: [testbed-manager] 2026-03-13 00:27:54.469830 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:27:54.469840 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:27:54.469849 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:27:54.469859 | orchestrator | 2026-03-13 00:27:54.469869 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-03-13 00:27:54.469878 | orchestrator | Friday 13 March 2026 00:27:30 +0000 (0:00:00.205) 0:00:39.163 ********** 2026-03-13 00:27:54.469888 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:27:54.469897 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:27:54.469907 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:27:54.469917 | orchestrator | ok: [testbed-manager] 2026-03-13 00:27:54.469926 | orchestrator | ok: 
[testbed-node-0] 2026-03-13 00:27:54.469936 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:27:54.469945 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:27:54.469958 | orchestrator | 2026-03-13 00:27:54.469976 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-03-13 00:27:54.469994 | orchestrator | Friday 13 March 2026 00:27:31 +0000 (0:00:00.209) 0:00:39.372 ********** 2026-03-13 00:27:54.470089 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:27:54.470184 | orchestrator | 2026-03-13 00:27:54.470206 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-03-13 00:27:54.470223 | orchestrator | Friday 13 March 2026 00:27:31 +0000 (0:00:00.298) 0:00:39.670 ********** 2026-03-13 00:27:54.470283 | orchestrator | ok: [testbed-manager] 2026-03-13 00:27:54.470300 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:27:54.470317 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:27:54.470332 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:27:54.470373 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:27:54.470391 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:27:54.470407 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:27:54.470424 | orchestrator | 2026-03-13 00:27:54.470440 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-03-13 00:27:54.470457 | orchestrator | Friday 13 March 2026 00:27:33 +0000 (0:00:01.811) 0:00:41.482 ********** 2026-03-13 00:27:54.470474 | orchestrator | changed: [testbed-manager] 2026-03-13 00:27:54.470492 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:27:54.470508 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:27:54.470525 | orchestrator | 
changed: [testbed-node-5] 2026-03-13 00:27:54.470542 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:27:54.470558 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:27:54.470582 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:27:54.470599 | orchestrator | 2026-03-13 00:27:54.470617 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-03-13 00:27:54.470634 | orchestrator | Friday 13 March 2026 00:27:34 +0000 (0:00:01.087) 0:00:42.569 ********** 2026-03-13 00:27:54.470650 | orchestrator | ok: [testbed-manager] 2026-03-13 00:27:54.470667 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:27:54.470684 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:27:54.470701 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:27:54.470718 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:27:54.470735 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:27:54.470751 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:27:54.470767 | orchestrator | 2026-03-13 00:27:54.470784 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-03-13 00:27:54.470801 | orchestrator | Friday 13 March 2026 00:27:35 +0000 (0:00:00.971) 0:00:43.541 ********** 2026-03-13 00:27:54.470820 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:27:54.470854 | orchestrator | 2026-03-13 00:27:54.470871 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-03-13 00:27:54.470888 | orchestrator | Friday 13 March 2026 00:27:35 +0000 (0:00:00.301) 0:00:43.842 ********** 2026-03-13 00:27:54.470905 | orchestrator | changed: [testbed-manager] 2026-03-13 00:27:54.470922 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:27:54.470939 | 
orchestrator | changed: [testbed-node-3] 2026-03-13 00:27:54.470955 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:27:54.470971 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:27:54.470988 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:27:54.471005 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:27:54.471021 | orchestrator | 2026-03-13 00:27:54.471063 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2026-03-13 00:27:54.471081 | orchestrator | Friday 13 March 2026 00:27:36 +0000 (0:00:01.019) 0:00:44.861 ********** 2026-03-13 00:27:54.471098 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:27:54.471140 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:27:54.471158 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:27:54.471174 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:27:54.471189 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:27:54.471204 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:27:54.471221 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:27:54.471237 | orchestrator | 2026-03-13 00:27:54.471254 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-03-13 00:27:54.471271 | orchestrator | Friday 13 March 2026 00:27:36 +0000 (0:00:00.239) 0:00:45.100 ********** 2026-03-13 00:27:54.471289 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:27:54.471306 | orchestrator | 2026-03-13 00:27:54.471323 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-03-13 00:27:54.471340 | orchestrator | Friday 13 March 2026 00:27:37 +0000 (0:00:00.278) 0:00:45.378 ********** 2026-03-13 00:27:54.471356 | orchestrator | ok: 
[testbed-manager] 2026-03-13 00:27:54.471373 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:27:54.471391 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:27:54.471409 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:27:54.471425 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:27:54.471442 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:27:54.471458 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:27:54.471474 | orchestrator | 2026-03-13 00:27:54.471489 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-03-13 00:27:54.471505 | orchestrator | Friday 13 March 2026 00:27:38 +0000 (0:00:01.709) 0:00:47.088 ********** 2026-03-13 00:27:54.471521 | orchestrator | changed: [testbed-manager] 2026-03-13 00:27:54.471538 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:27:54.471548 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:27:54.471558 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:27:54.471567 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:27:54.471577 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:27:54.471586 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:27:54.471596 | orchestrator | 2026-03-13 00:27:54.471606 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-03-13 00:27:54.471615 | orchestrator | Friday 13 March 2026 00:27:39 +0000 (0:00:01.037) 0:00:48.125 ********** 2026-03-13 00:27:54.471625 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:27:54.471634 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:27:54.471644 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:27:54.471653 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:27:54.471663 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:27:54.471672 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:27:54.471692 | orchestrator | changed: [testbed-manager] 2026-03-13 00:27:54.471701 | 
orchestrator | 2026-03-13 00:27:54.471711 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-03-13 00:27:54.471721 | orchestrator | Friday 13 March 2026 00:27:51 +0000 (0:00:11.398) 0:00:59.523 ********** 2026-03-13 00:27:54.471730 | orchestrator | ok: [testbed-manager] 2026-03-13 00:27:54.471740 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:27:54.471749 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:27:54.471759 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:27:54.471768 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:27:54.471778 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:27:54.471787 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:27:54.471864 | orchestrator | 2026-03-13 00:27:54.471875 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-03-13 00:27:54.471885 | orchestrator | Friday 13 March 2026 00:27:52 +0000 (0:00:01.465) 0:01:00.989 ********** 2026-03-13 00:27:54.471895 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:27:54.471904 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:27:54.471914 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:27:54.471923 | orchestrator | ok: [testbed-manager] 2026-03-13 00:27:54.471933 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:27:54.471943 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:27:54.471952 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:27:54.471962 | orchestrator | 2026-03-13 00:27:54.471978 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-03-13 00:27:54.471988 | orchestrator | Friday 13 March 2026 00:27:53 +0000 (0:00:01.010) 0:01:02.000 ********** 2026-03-13 00:27:54.471998 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:27:54.472008 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:27:54.472017 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:27:54.472027 | orchestrator | ok: 
[testbed-manager] 2026-03-13 00:27:54.472036 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:27:54.472046 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:27:54.472055 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:27:54.472065 | orchestrator | 2026-03-13 00:27:54.472075 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-03-13 00:27:54.472085 | orchestrator | Friday 13 March 2026 00:27:53 +0000 (0:00:00.242) 0:01:02.242 ********** 2026-03-13 00:27:54.472094 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:27:54.472104 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:27:54.472130 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:27:54.472140 | orchestrator | ok: [testbed-manager] 2026-03-13 00:27:54.472150 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:27:54.472159 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:27:54.472169 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:27:54.472178 | orchestrator | 2026-03-13 00:27:54.472188 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-03-13 00:27:54.472198 | orchestrator | Friday 13 March 2026 00:27:54 +0000 (0:00:00.238) 0:01:02.481 ********** 2026-03-13 00:27:54.472208 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:27:54.472219 | orchestrator | 2026-03-13 00:27:54.472240 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-03-13 00:30:08.695618 | orchestrator | Friday 13 March 2026 00:27:54 +0000 (0:00:00.320) 0:01:02.801 ********** 2026-03-13 00:30:08.695721 | orchestrator | ok: [testbed-manager] 2026-03-13 00:30:08.695732 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:30:08.695739 | orchestrator | 
ok: [testbed-node-5] 2026-03-13 00:30:08.695746 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:30:08.695752 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:30:08.695758 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:30:08.695764 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:30:08.695770 | orchestrator | 2026-03-13 00:30:08.695776 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2026-03-13 00:30:08.695801 | orchestrator | Friday 13 March 2026 00:27:56 +0000 (0:00:01.776) 0:01:04.578 ********** 2026-03-13 00:30:08.695807 | orchestrator | changed: [testbed-manager] 2026-03-13 00:30:08.695814 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:30:08.695820 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:30:08.695826 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:30:08.695832 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:30:08.695838 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:30:08.695844 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:30:08.695850 | orchestrator | 2026-03-13 00:30:08.695857 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-03-13 00:30:08.695868 | orchestrator | Friday 13 March 2026 00:27:56 +0000 (0:00:00.609) 0:01:05.188 ********** 2026-03-13 00:30:08.695876 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:30:08.695885 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:30:08.695895 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:30:08.695905 | orchestrator | ok: [testbed-manager] 2026-03-13 00:30:08.695915 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:30:08.695922 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:30:08.695927 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:30:08.695933 | orchestrator | 2026-03-13 00:30:08.695939 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-03-13 
00:30:08.695945 | orchestrator | Friday 13 March 2026 00:27:57 +0000 (0:00:00.223) 0:01:05.411 ********** 2026-03-13 00:30:08.695951 | orchestrator | ok: [testbed-manager] 2026-03-13 00:30:08.695957 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:30:08.695963 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:30:08.695968 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:30:08.695974 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:30:08.695980 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:30:08.695986 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:30:08.695991 | orchestrator | 2026-03-13 00:30:08.696006 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-03-13 00:30:08.696012 | orchestrator | Friday 13 March 2026 00:27:58 +0000 (0:00:01.397) 0:01:06.809 ********** 2026-03-13 00:30:08.696018 | orchestrator | changed: [testbed-manager] 2026-03-13 00:30:08.696024 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:30:08.696029 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:30:08.696035 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:30:08.696063 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:30:08.696070 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:30:08.696075 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:30:08.696081 | orchestrator | 2026-03-13 00:30:08.696087 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-03-13 00:30:08.696093 | orchestrator | Friday 13 March 2026 00:28:00 +0000 (0:00:01.786) 0:01:08.595 ********** 2026-03-13 00:30:08.696098 | orchestrator | ok: [testbed-manager] 2026-03-13 00:30:08.696104 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:30:08.696110 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:30:08.696115 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:30:08.696121 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:30:08.696127 | orchestrator | ok: 
[testbed-node-5] 2026-03-13 00:30:08.696133 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:30:08.696140 | orchestrator | 2026-03-13 00:30:08.696151 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-03-13 00:30:08.696160 | orchestrator | Friday 13 March 2026 00:28:03 +0000 (0:00:02.844) 0:01:11.439 ********** 2026-03-13 00:30:08.696169 | orchestrator | ok: [testbed-manager] 2026-03-13 00:30:08.696177 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:30:08.696187 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:30:08.696197 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:30:08.696208 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:30:08.696218 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:30:08.696228 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:30:08.696235 | orchestrator | 2026-03-13 00:30:08.696241 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-03-13 00:30:08.696266 | orchestrator | Friday 13 March 2026 00:28:37 +0000 (0:00:34.404) 0:01:45.844 ********** 2026-03-13 00:30:08.696272 | orchestrator | changed: [testbed-manager] 2026-03-13 00:30:08.696278 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:30:08.696284 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:30:08.696289 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:30:08.696295 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:30:08.696301 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:30:08.696307 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:30:08.696312 | orchestrator | 2026-03-13 00:30:08.696318 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-03-13 00:30:08.696324 | orchestrator | Friday 13 March 2026 00:29:54 +0000 (0:01:16.969) 0:03:02.814 ********** 2026-03-13 00:30:08.696330 | orchestrator | changed: [testbed-manager] 2026-03-13 00:30:08.696336 | orchestrator 
| changed: [testbed-node-4] 2026-03-13 00:30:08.696341 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:30:08.696347 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:30:08.696353 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:30:08.696358 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:30:08.696364 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:30:08.696370 | orchestrator | 2026-03-13 00:30:08.696376 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-03-13 00:30:08.696383 | orchestrator | Friday 13 March 2026 00:29:56 +0000 (0:00:01.525) 0:03:04.339 ********** 2026-03-13 00:30:08.696389 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:30:08.696394 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:30:08.696400 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:30:08.696406 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:30:08.696411 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:30:08.696417 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:30:08.696423 | orchestrator | changed: [testbed-manager] 2026-03-13 00:30:08.696429 | orchestrator | 2026-03-13 00:30:08.696435 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-03-13 00:30:08.696453 | orchestrator | Friday 13 March 2026 00:30:07 +0000 (0:00:11.416) 0:03:15.755 ********** 2026-03-13 00:30:08.696465 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-03-13 00:30:08.696480 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, 
testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-03-13 00:30:08.696489 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-03-13 00:30:08.696496 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-13 00:30:08.696507 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-13 00:30:08.696513 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 
'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-03-13 00:30:08.696521 | orchestrator | 2026-03-13 00:30:08.696527 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-03-13 00:30:08.696533 | orchestrator | Friday 13 March 2026 00:30:07 +0000 (0:00:00.382) 0:03:16.138 ********** 2026-03-13 00:30:08.696539 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-13 00:30:08.696545 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:30:08.696551 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-13 00:30:08.696557 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-13 00:30:08.696563 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:30:08.696569 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-13 00:30:08.696575 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:30:08.696581 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:30:08.696587 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-13 00:30:08.696598 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-13 00:30:08.696604 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-13 00:30:08.696610 | orchestrator | 2026-03-13 00:30:08.696616 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-03-13 00:30:08.696621 | orchestrator | Friday 13 March 2026 00:30:08 +0000 (0:00:00.821) 0:03:16.959 ********** 2026-03-13 00:30:08.696627 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-13 00:30:08.696634 | orchestrator | skipping: 
[testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-13 00:30:08.696640 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-13 00:30:08.696646 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-13 00:30:08.696652 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-13 00:30:08.696662 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-13 00:30:19.561498 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-13 00:30:19.562543 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-13 00:30:19.562609 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-13 00:30:19.562632 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-13 00:30:19.562644 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-13 00:30:19.562656 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-13 00:30:19.562668 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-13 00:30:19.562707 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-13 00:30:19.562719 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-13 00:30:19.562731 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-13 00:30:19.562742 | orchestrator | skipping: 
[testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-13 00:30:19.562753 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-13 00:30:19.562764 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-13 00:30:19.562775 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-13 00:30:19.562786 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-13 00:30:19.562797 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-13 00:30:19.562808 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-13 00:30:19.562819 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-13 00:30:19.562830 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:30:19.562842 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-13 00:30:19.562853 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-13 00:30:19.562864 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-13 00:30:19.562875 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-13 00:30:19.562886 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-13 00:30:19.562897 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-13 00:30:19.562908 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  
2026-03-13 00:30:19.562919 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-13 00:30:19.562932 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:30:19.562950 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-13 00:30:19.562986 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-13 00:30:19.563004 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-13 00:30:19.563021 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-13 00:30:19.563085 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-13 00:30:19.563104 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-13 00:30:19.563123 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-13 00:30:19.563143 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-13 00:30:19.563163 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:30:19.563181 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:30:19.563200 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-13 00:30:19.563220 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-13 00:30:19.563238 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-13 00:30:19.563275 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-13 00:30:19.563294 | orchestrator | changed: 
[testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-13 00:30:19.563334 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-13 00:30:19.563346 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-13 00:30:19.563356 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-13 00:30:19.563367 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-13 00:30:19.563378 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-13 00:30:19.563389 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-13 00:30:19.563399 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-13 00:30:19.563410 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-13 00:30:19.563421 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-13 00:30:19.563431 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-13 00:30:19.563442 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-13 00:30:19.563453 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-13 00:30:19.563464 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-13 00:30:19.563475 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-13 00:30:19.563486 | orchestrator | changed: [testbed-node-2] => (item={'name': 
'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-13 00:30:19.563496 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-13 00:30:19.563507 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-13 00:30:19.563518 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-13 00:30:19.563528 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-13 00:30:19.563539 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-13 00:30:19.563550 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-13 00:30:19.563564 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-13 00:30:19.563581 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-13 00:30:19.563609 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-13 00:30:19.563628 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-13 00:30:19.563645 | orchestrator | 2026-03-13 00:30:19.563663 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-03-13 00:30:19.563688 | orchestrator | Friday 13 March 2026 00:30:16 +0000 (0:00:07.934) 0:03:24.894 ********** 2026-03-13 00:30:19.563709 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-13 00:30:19.563727 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-13 00:30:19.563754 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-13 
00:30:19.563772 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-13 00:30:19.563830 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-13 00:30:19.563849 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-13 00:30:19.563950 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-13 00:30:19.563973 | orchestrator | 2026-03-13 00:30:19.563989 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2026-03-13 00:30:19.564127 | orchestrator | Friday 13 March 2026 00:30:18 +0000 (0:00:01.498) 0:03:26.392 ********** 2026-03-13 00:30:19.564153 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-13 00:30:19.564192 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-13 00:30:19.564210 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:30:19.564229 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:30:19.564241 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-13 00:30:19.564252 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:30:19.564313 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-13 00:30:19.564327 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:30:19.564338 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-13 00:30:19.564349 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-13 00:30:19.564387 | orchestrator | changed: [testbed-node-5] => (item={'name': 
'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-13 00:30:31.409106 | orchestrator | 2026-03-13 00:30:31.409203 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2026-03-13 00:30:31.409215 | orchestrator | Friday 13 March 2026 00:30:19 +0000 (0:00:01.521) 0:03:27.914 ********** 2026-03-13 00:30:31.409222 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-13 00:30:31.409230 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-13 00:30:31.409237 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:30:31.409245 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-13 00:30:31.409253 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:30:31.409258 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:30:31.409262 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-13 00:30:31.409266 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:30:31.409270 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-13 00:30:31.409275 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-13 00:30:31.409279 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-13 00:30:31.409283 | orchestrator | 2026-03-13 00:30:31.409287 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-03-13 00:30:31.409291 | orchestrator | Friday 13 March 2026 00:30:20 +0000 (0:00:00.602) 0:03:28.517 ********** 2026-03-13 00:30:31.409295 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'fs.inotify.max_user_instances', 'value': 1024})  2026-03-13 00:30:31.409298 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-13 00:30:31.409302 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:30:31.409306 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:30:31.409310 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-13 00:30:31.409329 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:30:31.409333 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-13 00:30:31.409337 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:30:31.409341 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-13 00:30:31.409345 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-13 00:30:31.409348 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-13 00:30:31.409352 | orchestrator | 2026-03-13 00:30:31.409356 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-03-13 00:30:31.409360 | orchestrator | Friday 13 March 2026 00:30:20 +0000 (0:00:00.529) 0:03:29.046 ********** 2026-03-13 00:30:31.409364 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:30:31.409368 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:30:31.409372 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:30:31.409376 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:30:31.409379 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:30:31.409383 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:30:31.409387 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:30:31.409391 | orchestrator | 
2026-03-13 00:30:31.409394 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-03-13 00:30:31.409399 | orchestrator | Friday 13 March 2026 00:30:20 +0000 (0:00:00.270) 0:03:29.317 **********
2026-03-13 00:30:31.409403 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:30:31.409408 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:30:31.409411 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:30:31.409415 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:30:31.409419 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:30:31.409423 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:30:31.409426 | orchestrator | ok: [testbed-manager]
2026-03-13 00:30:31.409430 | orchestrator | 
2026-03-13 00:30:31.409436 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-03-13 00:30:31.409442 | orchestrator | Friday 13 March 2026 00:30:25 +0000 (0:00:04.640) 0:03:33.958 **********
2026-03-13 00:30:31.409448 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-03-13 00:30:31.409454 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:30:31.409460 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-03-13 00:30:31.409467 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:30:31.409473 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-03-13 00:30:31.409478 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:30:31.409484 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-03-13 00:30:31.409502 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:30:31.409508 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-03-13 00:30:31.409521 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:30:31.409525 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-03-13 00:30:31.409529 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:30:31.409532 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-03-13 00:30:31.409536 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:30:31.409541 | orchestrator | 
2026-03-13 00:30:31.409548 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-03-13 00:30:31.409553 | orchestrator | Friday 13 March 2026 00:30:26 +0000 (0:00:00.415) 0:03:34.373 **********
2026-03-13 00:30:31.409556 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-03-13 00:30:31.409561 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-03-13 00:30:31.409566 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-03-13 00:30:31.409586 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-03-13 00:30:31.409593 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-03-13 00:30:31.409600 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-03-13 00:30:31.409613 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-03-13 00:30:31.409619 | orchestrator | 
2026-03-13 00:30:31.409627 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-03-13 00:30:31.409632 | orchestrator | Friday 13 March 2026 00:30:27 +0000 (0:00:01.021) 0:03:35.395 **********
2026-03-13 00:30:31.409642 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 00:30:31.409650 | orchestrator | 
2026-03-13 00:30:31.409657 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-03-13 00:30:31.409663 | orchestrator | Friday 13 March 2026 00:30:27 +0000 (0:00:00.419) 0:03:35.815 **********
2026-03-13 00:30:31.409670 | orchestrator | ok: [testbed-manager]
2026-03-13 00:30:31.409676 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:30:31.409683 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:30:31.409689 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:30:31.409696 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:30:31.409703 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:30:31.409710 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:30:31.409716 | orchestrator | 
2026-03-13 00:30:31.409724 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-03-13 00:30:31.409729 | orchestrator | Friday 13 March 2026 00:30:28 +0000 (0:00:01.510) 0:03:37.325 **********
2026-03-13 00:30:31.409733 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:30:31.409738 | orchestrator | ok: [testbed-manager]
2026-03-13 00:30:31.409742 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:30:31.409746 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:30:31.409751 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:30:31.409755 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:30:31.409760 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:30:31.409764 | orchestrator | 
2026-03-13 00:30:31.409769 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-03-13 00:30:31.409774 | orchestrator | Friday 13 March 2026 00:30:29 +0000 (0:00:00.653) 0:03:37.979 **********
2026-03-13 00:30:31.409780 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:30:31.409786 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:30:31.409808 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:30:31.409815 | orchestrator | changed: [testbed-manager]
2026-03-13 00:30:31.409822 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:30:31.409828 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:30:31.409836 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:30:31.409843 | orchestrator | 
2026-03-13 00:30:31.409850 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-03-13 00:30:31.409856 | orchestrator | Friday 13 March 2026 00:30:30 +0000 (0:00:00.587) 0:03:38.567 **********
2026-03-13 00:30:31.409863 | orchestrator | ok: [testbed-manager]
2026-03-13 00:30:31.409870 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:30:31.409876 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:30:31.409883 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:30:31.409889 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:30:31.409895 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:30:31.409901 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:30:31.409908 | orchestrator | 
2026-03-13 00:30:31.409914 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-03-13 00:30:31.409921 | orchestrator | Friday 13 March 2026 00:30:30 +0000 (0:00:00.602) 0:03:39.169 **********
2026-03-13 00:30:31.409933 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773360326.677645, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-13 00:30:31.409946 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773360331.82868, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-13 00:30:31.409953 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773360319.4883983, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-13 00:30:31.409974 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773360335.598835, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-13 00:30:36.937885 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773360338.5012028, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-13 00:30:36.937962 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773360345.385588, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-13 00:30:36.937969 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773360349.057307, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-13 00:30:36.937987 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-13 00:30:36.938080 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-13 00:30:36.938088 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-13 00:30:36.938092 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-13 00:30:36.938114 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-13 00:30:36.938119 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-13 00:30:36.938123 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-13 00:30:36.938127 | orchestrator | 
2026-03-13 00:30:36.938132 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-03-13 00:30:36.938137 | orchestrator | Friday 13 March 2026 00:30:31 +0000 (0:00:01.089) 0:03:40.258 **********
2026-03-13 00:30:36.938141 | orchestrator | changed: [testbed-manager]
2026-03-13 00:30:36.938147 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:30:36.938150 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:30:36.938160 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:30:36.938164 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:30:36.938168 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:30:36.938172 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:30:36.938176 | orchestrator | 
2026-03-13 00:30:36.938181 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-03-13 00:30:36.938188 | orchestrator | Friday 13 March 2026 00:30:33 +0000 (0:00:01.198) 0:03:41.457 **********
2026-03-13 00:30:36.938194 | orchestrator | changed: [testbed-manager]
2026-03-13 00:30:36.938200 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:30:36.938206 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:30:36.938215 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:30:36.938222 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:30:36.938228 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:30:36.938234 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:30:36.938246 | orchestrator | 
2026-03-13 00:30:36.938261 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-03-13 00:30:36.938267 | orchestrator | Friday 13 March 2026 00:30:34 +0000 (0:00:01.282) 0:03:42.740 **********
2026-03-13 00:30:36.938273 | orchestrator | changed: [testbed-manager]
2026-03-13 00:30:36.938280 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:30:36.938286 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:30:36.938292 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:30:36.938299 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:30:36.938305 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:30:36.938312 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:30:36.938318 | orchestrator | 
2026-03-13 00:30:36.938325 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-03-13 00:30:36.938332 | orchestrator | Friday 13 March 2026 00:30:35 +0000 (0:00:01.156) 0:03:43.897 **********
2026-03-13 00:30:36.938339 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:30:36.938345 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:30:36.938351 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:30:36.938357 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:30:36.938364 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:30:36.938371 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:30:36.938378 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:30:36.938384 | orchestrator | 
2026-03-13 00:30:36.938390 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-03-13 00:30:36.938397 | orchestrator | Friday 13 March 2026 00:30:35 +0000 (0:00:00.273) 0:03:44.170 **********
2026-03-13 00:30:36.938405 | orchestrator | ok: [testbed-manager]
2026-03-13 00:30:36.938413 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:30:36.938420 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:30:36.938427 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:30:36.938434 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:30:36.938438 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:30:36.938442 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:30:36.938446 | orchestrator | 
2026-03-13 00:30:36.938451 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-03-13 00:30:36.938455 | orchestrator | Friday 13 March 2026 00:30:36 +0000 (0:00:00.726) 0:03:44.896 **********
2026-03-13 00:30:36.938462 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 00:30:36.938468 | orchestrator | 
2026-03-13 00:30:36.938472 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-03-13 00:30:36.938483 | orchestrator | Friday 13 March 2026 00:30:36 +0000 (0:00:00.371) 0:03:45.268 **********
2026-03-13 00:31:52.562768 | orchestrator | ok: [testbed-manager]
2026-03-13 00:31:52.562858 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:31:52.562869 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:31:52.562896 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:31:52.562903 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:31:52.562909 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:31:52.562916 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:31:52.562922 | orchestrator | 
2026-03-13 00:31:52.562930 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-03-13 00:31:52.562938 | orchestrator | Friday 13 March 2026 00:30:45 +0000 (0:00:08.390) 0:03:53.658 **********
2026-03-13 00:31:52.562969 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:31:52.562981 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:31:52.562991 | orchestrator | ok: [testbed-manager]
2026-03-13 00:31:52.562998 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:31:52.563004 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:31:52.563010 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:31:52.563016 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:31:52.563022 | orchestrator | 
2026-03-13 00:31:52.563029 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-03-13 00:31:52.563035 | orchestrator | Friday 13 March 2026 00:30:46 +0000 (0:00:01.431) 0:03:55.090 **********
2026-03-13 00:31:52.563041 | orchestrator | ok: [testbed-manager]
2026-03-13 00:31:52.563047 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:31:52.563053 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:31:52.563059 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:31:52.563065 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:31:52.563071 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:31:52.563077 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:31:52.563083 | orchestrator | 
2026-03-13 00:31:52.563089 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-03-13 00:31:52.563096 | orchestrator | Friday 13 March 2026 00:30:47 +0000 (0:00:00.987) 0:03:56.077 **********
2026-03-13 00:31:52.563102 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:31:52.563108 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:31:52.563114 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:31:52.563120 | orchestrator | ok: [testbed-manager]
2026-03-13 00:31:52.563126 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:31:52.563132 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:31:52.563138 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:31:52.563144 | orchestrator | 
2026-03-13 00:31:52.563150 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-03-13 00:31:52.563157 | orchestrator | Friday 13 March 2026 00:30:48 +0000 (0:00:00.298) 0:03:56.375 **********
2026-03-13 00:31:52.563163 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:31:52.563169 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:31:52.563175 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:31:52.563181 | orchestrator | ok: [testbed-manager]
2026-03-13 00:31:52.563187 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:31:52.563193 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:31:52.563199 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:31:52.563205 | orchestrator | 
2026-03-13 00:31:52.563211 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-03-13 00:31:52.563217 | orchestrator | Friday 13 March 2026 00:30:48 +0000 (0:00:00.271) 0:03:56.647 **********
2026-03-13 00:31:52.563223 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:31:52.563230 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:31:52.563235 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:31:52.563242 | orchestrator | ok: [testbed-manager]
2026-03-13 00:31:52.563248 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:31:52.563254 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:31:52.563260 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:31:52.563266 | orchestrator | 
2026-03-13 00:31:52.563272 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-03-13 00:31:52.563278 | orchestrator | Friday 13 March 2026 00:30:48 +0000 (0:00:00.328) 0:03:56.976 **********
2026-03-13 00:31:52.563285 | orchestrator | ok: [testbed-manager]
2026-03-13 00:31:52.563291 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:31:52.563297 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:31:52.563309 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:31:52.563315 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:31:52.563321 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:31:52.563327 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:31:52.563335 | orchestrator | 
2026-03-13 00:31:52.563342 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-03-13 00:31:52.563349 | orchestrator | Friday 13 March 2026 00:30:53 +0000 (0:00:05.315) 0:04:02.291 **********
2026-03-13 00:31:52.563359 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 00:31:52.563368 | orchestrator | 
2026-03-13 00:31:52.563375 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-03-13 00:31:52.563382 | orchestrator | Friday 13 March 2026 00:30:54 +0000 (0:00:00.345) 0:04:02.637 **********
2026-03-13 00:31:52.563389 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-03-13 00:31:52.563396 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-03-13 00:31:52.563404 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:31:52.563411 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-03-13 00:31:52.563418 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-03-13 00:31:52.563425 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-03-13 00:31:52.563432 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-03-13 00:31:52.563439 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:31:52.563446 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-03-13 00:31:52.563452 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:31:52.563458 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-03-13 00:31:52.563464 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-03-13 00:31:52.563470 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-03-13 00:31:52.563476 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:31:52.563482 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-03-13 00:31:52.563489 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-03-13 00:31:52.563506 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:31:52.563513 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:31:52.563519 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-03-13 00:31:52.563525 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-03-13 00:31:52.563531 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:31:52.563538 | orchestrator | 
2026-03-13 00:31:52.563544 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-03-13 00:31:52.563550 | orchestrator | Friday 13 March 2026 00:30:54 +0000 (0:00:00.307) 0:04:02.945 **********
2026-03-13 00:31:52.563556 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 00:31:52.563563 | orchestrator | 
2026-03-13 00:31:52.563569 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-03-13 00:31:52.563575 | orchestrator | Friday 13 March 2026 00:30:54 +0000 (0:00:00.353) 0:04:03.298 **********
2026-03-13 00:31:52.563581 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-03-13 00:31:52.563587 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:31:52.563593 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-03-13 00:31:52.563601 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-03-13 00:31:52.563611 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:31:52.563621 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:31:52.563631 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-03-13 00:31:52.563647 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-03-13 00:31:52.563655 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:31:52.563664 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:31:52.563692 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-03-13 00:31:52.563702 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:31:52.563711 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-03-13 00:31:52.563720 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:31:52.563729 | orchestrator | 
2026-03-13 00:31:52.563739 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-03-13 00:31:52.563749 | orchestrator | Friday 13 March 2026 00:30:55 +0000 (0:00:00.271) 0:04:03.569 **********
2026-03-13 00:31:52.563759 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 00:31:52.563768 | orchestrator | 
2026-03-13 00:31:52.563779 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-03-13 00:31:52.563790 | orchestrator | Friday 13 March 2026 00:30:55 +0000 (0:00:00.387) 0:04:03.957 **********
2026-03-13 00:31:52.563805 | orchestrator | changed: [testbed-manager]
2026-03-13 00:31:52.563816 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:31:52.563827 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:31:52.563833 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:31:52.563839 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:31:52.563845 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:31:52.563851 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:31:52.563858 | orchestrator | 
2026-03-13 00:31:52.563864 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-03-13 00:31:52.563870 | orchestrator | Friday 13 March 2026 00:31:27 +0000 (0:00:32.152) 0:04:36.109 **********
2026-03-13 00:31:52.563876 | orchestrator | changed: [testbed-manager]
2026-03-13 00:31:52.563882 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:31:52.563888 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:31:52.563894 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:31:52.563900 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:31:52.563907 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:31:52.563915 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:31:52.563925 | orchestrator | 
2026-03-13 00:31:52.563934 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-03-13 00:31:52.563943 | orchestrator | Friday 13 March 2026 00:31:36 +0000 (0:00:08.562) 0:04:44.671 **********
2026-03-13 00:31:52.564005 | orchestrator | changed: [testbed-manager]
2026-03-13 00:31:52.564016 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:31:52.564027 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:31:52.564038 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:31:52.564047 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:31:52.564058 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:31:52.564065 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:31:52.564071 | orchestrator | 
2026-03-13 00:31:52.564077 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-03-13 00:31:52.564083 | orchestrator | Friday 13 March 2026 00:31:44 +0000 (0:00:07.875) 0:04:52.547 **********
2026-03-13 00:31:52.564089 | orchestrator | ok: [testbed-manager]
2026-03-13 00:31:52.564095 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:31:52.564101 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:31:52.564108 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:31:52.564115 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:31:52.564125 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:31:52.564136 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:31:52.564145 | orchestrator | 
2026-03-13 00:31:52.564155 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-03-13 00:31:52.564173 | orchestrator | Friday 13 March 2026 00:31:46 +0000 (0:00:01.828) 0:04:54.376 **********
2026-03-13 00:31:52.564183 | orchestrator | changed: [testbed-manager]
2026-03-13 00:31:52.564192 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:31:52.564202 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:31:52.564211 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:31:52.564220 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:31:52.564229 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:31:52.564238 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:31:52.564248 | orchestrator | 
2026-03-13 00:31:52.564268 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-03-13 00:32:03.407825 | orchestrator | Friday 13 March 2026 00:31:52 +0000 (0:00:06.517) 0:05:00.893 **********
2026-03-13 00:32:03.408008 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 00:32:03.408035 | orchestrator | 
2026-03-13 00:32:03.408055 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-03-13 00:32:03.408074 | orchestrator | Friday 13 March 2026 00:31:52 +0000 (0:00:00.401) 0:05:01.295 **********
2026-03-13 00:32:03.408092 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:32:03.408111 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:32:03.408128 | orchestrator | changed: [testbed-manager]
2026-03-13 00:32:03.408146 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:32:03.408164 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:32:03.408183 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:32:03.408201 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:32:03.408218 | orchestrator | 
2026-03-13 00:32:03.408236 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-03-13 00:32:03.408254 | orchestrator | Friday 13 March 2026 00:31:53 +0000 (0:00:00.755) 0:05:02.050 **********
2026-03-13 00:32:03.408272 | orchestrator | ok: [testbed-manager]
2026-03-13 00:32:03.408290 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:32:03.408309 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:32:03.408327 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:32:03.408349 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:32:03.408366 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:32:03.408385 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:32:03.408404 | orchestrator | 
2026-03-13 00:32:03.408423 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-03-13 00:32:03.408445 | orchestrator | Friday 13 March 2026 00:31:55 +0000 (0:00:01.702) 0:05:03.753 **********
2026-03-13 00:32:03.408467 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:32:03.408488 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:32:03.408510 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:32:03.408534 | orchestrator | changed: [testbed-manager]
2026-03-13 00:32:03.408555 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:32:03.408576 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:32:03.408598 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:32:03.408620 | orchestrator | 
2026-03-13 00:32:03.408643 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-03-13 00:32:03.408666 | orchestrator | Friday 13 March 2026 00:31:56 +0000 (0:00:00.768) 0:05:04.521 **********
2026-03-13 00:32:03.408688 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:32:03.408708 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:32:03.408725 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:32:03.408743 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:32:03.408759 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:32:03.408777 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:32:03.408795 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:32:03.408815 | orchestrator | 
2026-03-13 00:32:03.408832 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-03-13 00:32:03.408897 | orchestrator | Friday 13 March 2026 00:31:56 +0000 (0:00:00.267) 
0:05:04.788 ********** 2026-03-13 00:32:03.408917 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:32:03.409015 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:32:03.409039 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:32:03.409055 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:32:03.409071 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:32:03.409087 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:32:03.409103 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:32:03.409118 | orchestrator | 2026-03-13 00:32:03.409135 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-03-13 00:32:03.409152 | orchestrator | Friday 13 March 2026 00:31:56 +0000 (0:00:00.409) 0:05:05.198 ********** 2026-03-13 00:32:03.409167 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:32:03.409186 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:32:03.409202 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:32:03.409218 | orchestrator | ok: [testbed-manager] 2026-03-13 00:32:03.409235 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:32:03.409250 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:32:03.409266 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:32:03.409282 | orchestrator | 2026-03-13 00:32:03.409298 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-03-13 00:32:03.409315 | orchestrator | Friday 13 March 2026 00:31:57 +0000 (0:00:00.302) 0:05:05.500 ********** 2026-03-13 00:32:03.409332 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:32:03.409348 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:32:03.409365 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:32:03.409382 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:32:03.409398 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:32:03.409413 | orchestrator | skipping: [testbed-node-1] 2026-03-13 
00:32:03.409430 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:32:03.409445 | orchestrator | 2026-03-13 00:32:03.409461 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-03-13 00:32:03.409479 | orchestrator | Friday 13 March 2026 00:31:57 +0000 (0:00:00.279) 0:05:05.780 ********** 2026-03-13 00:32:03.409494 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:32:03.409511 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:32:03.409526 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:32:03.409542 | orchestrator | ok: [testbed-manager] 2026-03-13 00:32:03.409558 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:32:03.409574 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:32:03.409589 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:32:03.409604 | orchestrator | 2026-03-13 00:32:03.409620 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-03-13 00:32:03.409636 | orchestrator | Friday 13 March 2026 00:31:57 +0000 (0:00:00.294) 0:05:06.074 ********** 2026-03-13 00:32:03.409651 | orchestrator | ok: [testbed-node-3] =>  2026-03-13 00:32:03.409667 | orchestrator |  docker_version: 5:27.5.1 2026-03-13 00:32:03.409682 | orchestrator | ok: [testbed-node-4] =>  2026-03-13 00:32:03.409698 | orchestrator |  docker_version: 5:27.5.1 2026-03-13 00:32:03.409716 | orchestrator | ok: [testbed-node-5] =>  2026-03-13 00:32:03.409732 | orchestrator |  docker_version: 5:27.5.1 2026-03-13 00:32:03.409748 | orchestrator | ok: [testbed-manager] =>  2026-03-13 00:32:03.409764 | orchestrator |  docker_version: 5:27.5.1 2026-03-13 00:32:03.409811 | orchestrator | ok: [testbed-node-0] =>  2026-03-13 00:32:03.409830 | orchestrator |  docker_version: 5:27.5.1 2026-03-13 00:32:03.409847 | orchestrator | ok: [testbed-node-1] =>  2026-03-13 00:32:03.409862 | orchestrator |  docker_version: 5:27.5.1 2026-03-13 00:32:03.409878 | orchestrator | ok: [testbed-node-2] =>  
2026-03-13 00:32:03.409893 | orchestrator |  docker_version: 5:27.5.1 2026-03-13 00:32:03.409908 | orchestrator | 2026-03-13 00:32:03.409924 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-03-13 00:32:03.409974 | orchestrator | Friday 13 March 2026 00:31:58 +0000 (0:00:00.296) 0:05:06.370 ********** 2026-03-13 00:32:03.409992 | orchestrator | ok: [testbed-node-3] =>  2026-03-13 00:32:03.410104 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-13 00:32:03.410127 | orchestrator | ok: [testbed-node-4] =>  2026-03-13 00:32:03.410142 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-13 00:32:03.410157 | orchestrator | ok: [testbed-node-5] =>  2026-03-13 00:32:03.410173 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-13 00:32:03.410189 | orchestrator | ok: [testbed-manager] =>  2026-03-13 00:32:03.410205 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-13 00:32:03.410221 | orchestrator | ok: [testbed-node-0] =>  2026-03-13 00:32:03.410237 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-13 00:32:03.410253 | orchestrator | ok: [testbed-node-1] =>  2026-03-13 00:32:03.410269 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-13 00:32:03.410286 | orchestrator | ok: [testbed-node-2] =>  2026-03-13 00:32:03.410303 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-13 00:32:03.410319 | orchestrator | 2026-03-13 00:32:03.410336 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-03-13 00:32:03.410354 | orchestrator | Friday 13 March 2026 00:31:58 +0000 (0:00:00.276) 0:05:06.647 ********** 2026-03-13 00:32:03.410371 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:32:03.410388 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:32:03.410405 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:32:03.410424 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:32:03.410442 | orchestrator | skipping: [testbed-node-0] 
2026-03-13 00:32:03.410459 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:32:03.410476 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:32:03.410494 | orchestrator | 2026-03-13 00:32:03.410513 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-03-13 00:32:03.410531 | orchestrator | Friday 13 March 2026 00:31:58 +0000 (0:00:00.270) 0:05:06.917 ********** 2026-03-13 00:32:03.410549 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:32:03.410566 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:32:03.410584 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:32:03.410601 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:32:03.410619 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:32:03.410637 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:32:03.410654 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:32:03.410671 | orchestrator | 2026-03-13 00:32:03.410689 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-03-13 00:32:03.410707 | orchestrator | Friday 13 March 2026 00:31:58 +0000 (0:00:00.254) 0:05:07.171 ********** 2026-03-13 00:32:03.410738 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:32:03.410761 | orchestrator | 2026-03-13 00:32:03.410778 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-03-13 00:32:03.410795 | orchestrator | Friday 13 March 2026 00:31:59 +0000 (0:00:00.504) 0:05:07.676 ********** 2026-03-13 00:32:03.410811 | orchestrator | ok: [testbed-manager] 2026-03-13 00:32:03.410829 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:32:03.410847 | orchestrator | ok: [testbed-node-5] 2026-03-13 
00:32:03.410865 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:32:03.410882 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:32:03.410898 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:32:03.410915 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:32:03.410961 | orchestrator | 2026-03-13 00:32:03.410980 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-03-13 00:32:03.410997 | orchestrator | Friday 13 March 2026 00:32:00 +0000 (0:00:00.845) 0:05:08.521 ********** 2026-03-13 00:32:03.411016 | orchestrator | ok: [testbed-manager] 2026-03-13 00:32:03.411033 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:32:03.411052 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:32:03.411069 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:32:03.411117 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:32:03.411137 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:32:03.411155 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:32:03.411172 | orchestrator | 2026-03-13 00:32:03.411190 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-03-13 00:32:03.411210 | orchestrator | Friday 13 March 2026 00:32:03 +0000 (0:00:02.865) 0:05:11.387 ********** 2026-03-13 00:32:03.411230 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-03-13 00:32:03.411283 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-03-13 00:32:03.411296 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-03-13 00:32:03.411307 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:32:03.411318 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-03-13 00:32:03.411329 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-03-13 00:32:03.411339 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-03-13 00:32:03.411350 | orchestrator | skipping: 
[testbed-node-5] => (item=containerd)  2026-03-13 00:32:03.411360 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-03-13 00:32:03.411372 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-03-13 00:32:03.411382 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:32:03.411393 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-03-13 00:32:03.411403 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-03-13 00:32:03.411414 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:32:03.411425 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-03-13 00:32:03.411436 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-03-13 00:32:03.411464 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-03-13 00:33:05.600225 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-03-13 00:33:05.600314 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:33:05.600326 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-03-13 00:33:05.600334 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-03-13 00:33:05.600342 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-03-13 00:33:05.600349 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:33:05.600357 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:33:05.600365 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-03-13 00:33:05.600372 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-03-13 00:33:05.600379 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-03-13 00:33:05.600387 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:33:05.600394 | orchestrator | 2026-03-13 00:33:05.600403 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-03-13 00:33:05.600412 | orchestrator | 
Friday 13 March 2026 00:32:03 +0000 (0:00:00.564) 0:05:11.952 ********** 2026-03-13 00:33:05.600419 | orchestrator | ok: [testbed-manager] 2026-03-13 00:33:05.600426 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:33:05.600434 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:33:05.600441 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:33:05.600448 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:33:05.600456 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:33:05.600463 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:33:05.600470 | orchestrator | 2026-03-13 00:33:05.600477 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-03-13 00:33:05.600485 | orchestrator | Friday 13 March 2026 00:32:10 +0000 (0:00:07.119) 0:05:19.071 ********** 2026-03-13 00:33:05.600492 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:33:05.600500 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:33:05.600507 | orchestrator | ok: [testbed-manager] 2026-03-13 00:33:05.600514 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:33:05.600521 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:33:05.600529 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:33:05.600556 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:33:05.600564 | orchestrator | 2026-03-13 00:33:05.600571 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-03-13 00:33:05.600578 | orchestrator | Friday 13 March 2026 00:32:11 +0000 (0:00:01.055) 0:05:20.126 ********** 2026-03-13 00:33:05.600586 | orchestrator | ok: [testbed-manager] 2026-03-13 00:33:05.600593 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:33:05.600600 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:33:05.600607 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:33:05.600614 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:33:05.600621 | 
orchestrator | changed: [testbed-node-2] 2026-03-13 00:33:05.600628 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:33:05.600635 | orchestrator | 2026-03-13 00:33:05.600643 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-03-13 00:33:05.600650 | orchestrator | Friday 13 March 2026 00:32:20 +0000 (0:00:08.373) 0:05:28.500 ********** 2026-03-13 00:33:05.600658 | orchestrator | changed: [testbed-manager] 2026-03-13 00:33:05.600665 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:33:05.600684 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:33:05.600692 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:33:05.600699 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:33:05.600706 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:33:05.600713 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:33:05.600721 | orchestrator | 2026-03-13 00:33:05.600728 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-03-13 00:33:05.600735 | orchestrator | Friday 13 March 2026 00:32:23 +0000 (0:00:03.149) 0:05:31.649 ********** 2026-03-13 00:33:05.600742 | orchestrator | ok: [testbed-manager] 2026-03-13 00:33:05.600750 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:33:05.600757 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:33:05.600764 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:33:05.600771 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:33:05.600778 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:33:05.600786 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:33:05.600794 | orchestrator | 2026-03-13 00:33:05.600803 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-03-13 00:33:05.600811 | orchestrator | Friday 13 March 2026 00:32:24 +0000 (0:00:01.527) 0:05:33.177 ********** 2026-03-13 00:33:05.600820 | orchestrator | changed: 
[testbed-node-3] 2026-03-13 00:33:05.600853 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:33:05.600862 | orchestrator | ok: [testbed-manager] 2026-03-13 00:33:05.600870 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:33:05.600878 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:33:05.600887 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:33:05.600895 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:33:05.600904 | orchestrator | 2026-03-13 00:33:05.600912 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-03-13 00:33:05.600921 | orchestrator | Friday 13 March 2026 00:32:26 +0000 (0:00:01.292) 0:05:34.469 ********** 2026-03-13 00:33:05.600928 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:33:05.600937 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:33:05.600946 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:33:05.600954 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:33:05.600962 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:33:05.600971 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:33:05.600979 | orchestrator | changed: [testbed-manager] 2026-03-13 00:33:05.600987 | orchestrator | 2026-03-13 00:33:05.600996 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-03-13 00:33:05.601004 | orchestrator | Friday 13 March 2026 00:32:26 +0000 (0:00:00.763) 0:05:35.232 ********** 2026-03-13 00:33:05.601013 | orchestrator | ok: [testbed-manager] 2026-03-13 00:33:05.601021 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:33:05.601029 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:33:05.601043 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:33:05.601050 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:33:05.601057 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:33:05.601064 | orchestrator | changed: [testbed-node-2] 2026-03-13 
00:33:05.601072 | orchestrator | 2026-03-13 00:33:05.601079 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-03-13 00:33:05.601099 | orchestrator | Friday 13 March 2026 00:32:36 +0000 (0:00:09.928) 0:05:45.161 ********** 2026-03-13 00:33:05.601107 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:33:05.601114 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:33:05.601121 | orchestrator | changed: [testbed-manager] 2026-03-13 00:33:05.601129 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:33:05.601136 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:33:05.601143 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:33:05.601150 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:33:05.601157 | orchestrator | 2026-03-13 00:33:05.601165 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-03-13 00:33:05.601172 | orchestrator | Friday 13 March 2026 00:32:37 +0000 (0:00:00.924) 0:05:46.085 ********** 2026-03-13 00:33:05.601179 | orchestrator | ok: [testbed-manager] 2026-03-13 00:33:05.601187 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:33:05.601194 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:33:05.601201 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:33:05.601208 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:33:05.601215 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:33:05.601223 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:33:05.601230 | orchestrator | 2026-03-13 00:33:05.601237 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-03-13 00:33:05.601244 | orchestrator | Friday 13 March 2026 00:32:47 +0000 (0:00:09.481) 0:05:55.567 ********** 2026-03-13 00:33:05.601252 | orchestrator | ok: [testbed-manager] 2026-03-13 00:33:05.601259 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:33:05.601266 | 
orchestrator | changed: [testbed-node-2] 2026-03-13 00:33:05.601274 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:33:05.601281 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:33:05.601288 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:33:05.601295 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:33:05.601302 | orchestrator | 2026-03-13 00:33:05.601309 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-03-13 00:33:05.601317 | orchestrator | Friday 13 March 2026 00:32:58 +0000 (0:00:11.531) 0:06:07.099 ********** 2026-03-13 00:33:05.601324 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-03-13 00:33:05.601332 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-03-13 00:33:05.601339 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-03-13 00:33:05.601346 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-03-13 00:33:05.601353 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-03-13 00:33:05.601360 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-03-13 00:33:05.601368 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-03-13 00:33:05.601375 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-03-13 00:33:05.601382 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-03-13 00:33:05.601389 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-03-13 00:33:05.601396 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-03-13 00:33:05.601404 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-03-13 00:33:05.601411 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-03-13 00:33:05.601418 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-03-13 00:33:05.601425 | orchestrator | 2026-03-13 00:33:05.601433 | orchestrator | TASK [osism.services.docker : Install python3 
docker package] ****************** 2026-03-13 00:33:05.601441 | orchestrator | Friday 13 March 2026 00:32:59 +0000 (0:00:01.218) 0:06:08.318 ********** 2026-03-13 00:33:05.601453 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:33:05.601460 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:33:05.601468 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:33:05.601475 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:33:05.601482 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:33:05.601489 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:33:05.601496 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:33:05.601504 | orchestrator | 2026-03-13 00:33:05.601511 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-03-13 00:33:05.601518 | orchestrator | Friday 13 March 2026 00:33:00 +0000 (0:00:00.546) 0:06:08.865 ********** 2026-03-13 00:33:05.601525 | orchestrator | ok: [testbed-manager] 2026-03-13 00:33:05.601533 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:33:05.601540 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:33:05.601547 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:33:05.601554 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:33:05.601561 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:33:05.601568 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:33:05.601576 | orchestrator | 2026-03-13 00:33:05.601583 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-03-13 00:33:05.601591 | orchestrator | Friday 13 March 2026 00:33:04 +0000 (0:00:04.119) 0:06:12.984 ********** 2026-03-13 00:33:05.601599 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:33:05.601606 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:33:05.601613 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:33:05.601621 | orchestrator | skipping: 
[testbed-manager] 2026-03-13 00:33:05.601628 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:33:05.601635 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:33:05.601642 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:33:05.601649 | orchestrator | 2026-03-13 00:33:05.601657 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-03-13 00:33:05.601665 | orchestrator | Friday 13 March 2026 00:33:05 +0000 (0:00:00.664) 0:06:13.648 ********** 2026-03-13 00:33:05.601672 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-03-13 00:33:05.601679 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-03-13 00:33:05.601687 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:33:05.601725 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-03-13 00:33:05.601733 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-03-13 00:33:05.601740 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:33:05.601747 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-03-13 00:33:05.601754 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-03-13 00:33:05.601761 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:33:05.601774 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2026-03-13 00:33:24.672611 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-03-13 00:33:24.672741 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:33:24.672766 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-03-13 00:33:24.672786 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-03-13 00:33:24.672867 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:33:24.672878 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-03-13 00:33:24.672888 | 
orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-03-13 00:33:24.672897 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:33:24.672907 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-03-13 00:33:24.672917 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-03-13 00:33:24.672926 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:33:24.672936 | orchestrator | 2026-03-13 00:33:24.672947 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-03-13 00:33:24.672983 | orchestrator | Friday 13 March 2026 00:33:05 +0000 (0:00:00.560) 0:06:14.209 ********** 2026-03-13 00:33:24.672993 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:33:24.673003 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:33:24.673012 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:33:24.673025 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:33:24.673044 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:33:24.673060 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:33:24.673076 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:33:24.673094 | orchestrator | 2026-03-13 00:33:24.673112 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-03-13 00:33:24.673131 | orchestrator | Friday 13 March 2026 00:33:06 +0000 (0:00:00.504) 0:06:14.714 ********** 2026-03-13 00:33:24.673148 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:33:24.673162 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:33:24.673179 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:33:24.673194 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:33:24.673211 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:33:24.673225 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:33:24.673241 | orchestrator | skipping: [testbed-node-2] 
2026-03-13 00:33:24.673257 | orchestrator |
2026-03-13 00:33:24.673273 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-03-13 00:33:24.673289 | orchestrator | Friday 13 March 2026 00:33:06 +0000 (0:00:00.519) 0:06:15.233 **********
2026-03-13 00:33:24.673304 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:33:24.673321 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:33:24.673338 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:33:24.673355 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:33:24.673371 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:33:24.673388 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:33:24.673398 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:33:24.673407 | orchestrator |
2026-03-13 00:33:24.673417 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-03-13 00:33:24.673441 | orchestrator | Friday 13 March 2026 00:33:07 +0000 (0:00:00.513) 0:06:15.747 **********
2026-03-13 00:33:24.673451 | orchestrator | ok: [testbed-manager]
2026-03-13 00:33:24.673460 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:33:24.673470 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:33:24.673479 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:33:24.673488 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:33:24.673497 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:33:24.673507 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:33:24.673516 | orchestrator |
2026-03-13 00:33:24.673525 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-03-13 00:33:24.673535 | orchestrator | Friday 13 March 2026 00:33:09 +0000 (0:00:02.031) 0:06:17.778 **********
2026-03-13 00:33:24.673545 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 00:33:24.673557 | orchestrator |
2026-03-13 00:33:24.673567 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-03-13 00:33:24.673576 | orchestrator | Friday 13 March 2026 00:33:10 +0000 (0:00:00.831) 0:06:18.610 **********
2026-03-13 00:33:24.673586 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:33:24.673595 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:33:24.673605 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:33:24.673614 | orchestrator | ok: [testbed-manager]
2026-03-13 00:33:24.673623 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:33:24.673633 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:33:24.673643 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:33:24.673652 | orchestrator |
2026-03-13 00:33:24.673662 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-03-13 00:33:24.673682 | orchestrator | Friday 13 March 2026 00:33:11 +0000 (0:00:00.834) 0:06:19.445 **********
2026-03-13 00:33:24.673691 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:33:24.673700 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:33:24.673710 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:33:24.673719 | orchestrator | ok: [testbed-manager]
2026-03-13 00:33:24.673728 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:33:24.673738 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:33:24.673747 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:33:24.673757 | orchestrator |
2026-03-13 00:33:24.673766 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-03-13 00:33:24.673776 | orchestrator | Friday 13 March 2026 00:33:12 +0000 (0:00:01.045) 0:06:20.490 **********
2026-03-13 00:33:24.673785 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:33:24.673819 | orchestrator | ok: [testbed-manager]
2026-03-13 00:33:24.673830 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:33:24.673839 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:33:24.673849 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:33:24.673858 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:33:24.673867 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:33:24.673877 | orchestrator |
2026-03-13 00:33:24.673886 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-03-13 00:33:24.673916 | orchestrator | Friday 13 March 2026 00:33:13 +0000 (0:00:01.358) 0:06:21.849 **********
2026-03-13 00:33:24.673926 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:33:24.673936 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:33:24.673945 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:33:24.673955 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:33:24.673964 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:33:24.673974 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:33:24.673983 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:33:24.673993 | orchestrator |
2026-03-13 00:33:24.674002 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-03-13 00:33:24.674012 | orchestrator | Friday 13 March 2026 00:33:15 +0000 (0:00:01.570) 0:06:23.419 **********
2026-03-13 00:33:24.674086 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:33:24.674096 | orchestrator | ok: [testbed-manager]
2026-03-13 00:33:24.674105 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:33:24.674115 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:33:24.674124 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:33:24.674145 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:33:24.674164 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:33:24.674174 | orchestrator |
2026-03-13 00:33:24.674184 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-03-13 00:33:24.674193 | orchestrator | Friday 13 March 2026 00:33:16 +0000 (0:00:01.295) 0:06:24.715 **********
2026-03-13 00:33:24.674203 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:33:24.674213 | orchestrator | changed: [testbed-manager]
2026-03-13 00:33:24.674222 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:33:24.674232 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:33:24.674241 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:33:24.674250 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:33:24.674260 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:33:24.674269 | orchestrator |
2026-03-13 00:33:24.674279 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-03-13 00:33:24.674289 | orchestrator | Friday 13 March 2026 00:33:17 +0000 (0:00:01.413) 0:06:26.128 **********
2026-03-13 00:33:24.674299 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 00:33:24.674309 | orchestrator |
2026-03-13 00:33:24.674318 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-03-13 00:33:24.674328 | orchestrator | Friday 13 March 2026 00:33:18 +0000 (0:00:00.990) 0:06:27.119 **********
2026-03-13 00:33:24.674350 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:33:24.674360 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:33:24.674369 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:33:24.674379 | orchestrator | ok: [testbed-manager]
2026-03-13 00:33:24.674388 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:33:24.674397 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:33:24.674407 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:33:24.674416 | orchestrator |
2026-03-13 00:33:24.674426 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-03-13 00:33:24.674436 | orchestrator | Friday 13 March 2026 00:33:20 +0000 (0:00:01.346) 0:06:28.466 **********
2026-03-13 00:33:24.674445 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:33:24.674455 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:33:24.674464 | orchestrator | ok: [testbed-manager]
2026-03-13 00:33:24.674474 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:33:24.674484 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:33:24.674501 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:33:24.674517 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:33:24.674533 | orchestrator |
2026-03-13 00:33:24.674550 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-03-13 00:33:24.674565 | orchestrator | Friday 13 March 2026 00:33:21 +0000 (0:00:01.174) 0:06:29.640 **********
2026-03-13 00:33:24.674583 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:33:24.674597 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:33:24.674612 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:33:24.674628 | orchestrator | ok: [testbed-manager]
2026-03-13 00:33:24.674642 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:33:24.674659 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:33:24.674676 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:33:24.674691 | orchestrator |
2026-03-13 00:33:24.674707 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-03-13 00:33:24.674724 | orchestrator | Friday 13 March 2026 00:33:22 +0000 (0:00:01.097) 0:06:30.738 **********
2026-03-13 00:33:24.674738 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:33:24.674753 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:33:24.674768 | orchestrator | ok: [testbed-manager]
2026-03-13 00:33:24.674785 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:33:24.674866 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:33:24.674884 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:33:24.674901 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:33:24.674917 | orchestrator |
2026-03-13 00:33:24.674934 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-03-13 00:33:24.674949 | orchestrator | Friday 13 March 2026 00:33:23 +0000 (0:00:01.305) 0:06:32.043 **********
2026-03-13 00:33:24.674964 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 00:33:24.674981 | orchestrator |
2026-03-13 00:33:24.674997 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-13 00:33:24.675013 | orchestrator | Friday 13 March 2026 00:33:24 +0000 (0:00:00.835) 0:06:32.879 **********
2026-03-13 00:33:24.675029 | orchestrator |
2026-03-13 00:33:24.675045 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-13 00:33:24.675058 | orchestrator | Friday 13 March 2026 00:33:24 +0000 (0:00:00.037) 0:06:32.916 **********
2026-03-13 00:33:24.675072 | orchestrator |
2026-03-13 00:33:24.675088 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-13 00:33:24.675104 | orchestrator | Friday 13 March 2026 00:33:24 +0000 (0:00:00.036) 0:06:32.953 **********
2026-03-13 00:33:24.675121 | orchestrator |
2026-03-13 00:33:24.675138 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-13 00:33:24.675172 | orchestrator | Friday 13 March 2026 00:33:24 +0000 (0:00:00.050) 0:06:33.003 **********
2026-03-13 00:33:49.274464 | orchestrator |
2026-03-13 00:33:49.274588 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-13 00:33:49.274606 | orchestrator | Friday 13 March 2026 00:33:24 +0000 (0:00:00.039) 0:06:33.043 **********
2026-03-13 00:33:49.274618 | orchestrator |
2026-03-13 00:33:49.274629 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-13 00:33:49.274640 | orchestrator | Friday 13 March 2026 00:33:24 +0000 (0:00:00.040) 0:06:33.083 **********
2026-03-13 00:33:49.274651 | orchestrator |
2026-03-13 00:33:49.274661 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-13 00:33:49.274672 | orchestrator | Friday 13 March 2026 00:33:24 +0000 (0:00:00.048) 0:06:33.131 **********
2026-03-13 00:33:49.274683 | orchestrator |
2026-03-13 00:33:49.274693 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-13 00:33:49.274704 | orchestrator | Friday 13 March 2026 00:33:24 +0000 (0:00:00.038) 0:06:33.170 **********
2026-03-13 00:33:49.274715 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:33:49.274726 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:33:49.274737 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:33:49.274793 | orchestrator |
2026-03-13 00:33:49.274805 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-03-13 00:33:49.274816 | orchestrator | Friday 13 March 2026 00:33:26 +0000 (0:00:01.196) 0:06:34.367 **********
2026-03-13 00:33:49.274827 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:33:49.274838 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:33:49.274849 | orchestrator | changed: [testbed-manager]
2026-03-13 00:33:49.274860 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:33:49.274870 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:33:49.274881 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:33:49.274892 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:33:49.274902 | orchestrator |
2026-03-13 00:33:49.274913 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-03-13 00:33:49.274924 | orchestrator | Friday 13 March 2026 00:33:27 +0000 (0:00:01.380) 0:06:35.747 **********
2026-03-13 00:33:49.274935 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:33:49.274945 | orchestrator | changed: [testbed-manager]
2026-03-13 00:33:49.274956 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:33:49.274967 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:33:49.274977 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:33:49.274990 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:33:49.275002 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:33:49.275014 | orchestrator |
2026-03-13 00:33:49.275026 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-03-13 00:33:49.275038 | orchestrator | Friday 13 March 2026 00:33:28 +0000 (0:00:01.198) 0:06:36.946 **********
2026-03-13 00:33:49.275050 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:33:49.275063 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:33:49.275075 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:33:49.275087 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:33:49.275100 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:33:49.275113 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:33:49.275125 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:33:49.275137 | orchestrator |
2026-03-13 00:33:49.275162 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-03-13 00:33:49.275175 | orchestrator | Friday 13 March 2026 00:33:30 +0000 (0:00:02.370) 0:06:39.317 **********
2026-03-13 00:33:49.275187 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:33:49.275199 | orchestrator |
2026-03-13 00:33:49.275211 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-03-13 00:33:49.275224 | orchestrator | Friday 13 March 2026 00:33:31 +0000 (0:00:00.089) 0:06:39.406 **********
2026-03-13 00:33:49.275236 | orchestrator | ok: [testbed-manager]
2026-03-13 00:33:49.275249 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:33:49.275261 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:33:49.275273 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:33:49.275294 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:33:49.275306 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:33:49.275318 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:33:49.275330 | orchestrator |
2026-03-13 00:33:49.275342 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-03-13 00:33:49.275354 | orchestrator | Friday 13 March 2026 00:33:32 +0000 (0:00:00.935) 0:06:40.342 **********
2026-03-13 00:33:49.275364 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:33:49.275375 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:33:49.275386 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:33:49.275396 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:33:49.275407 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:33:49.275417 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:33:49.275428 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:33:49.275438 | orchestrator |
2026-03-13 00:33:49.275449 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-03-13 00:33:49.275460 | orchestrator | Friday 13 March 2026 00:33:32 +0000 (0:00:00.551) 0:06:40.894 **********
2026-03-13 00:33:49.275471 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 00:33:49.275484 | orchestrator |
2026-03-13 00:33:49.275495 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-03-13 00:33:49.275506 | orchestrator | Friday 13 March 2026 00:33:33 +0000 (0:00:00.742) 0:06:41.636 **********
2026-03-13 00:33:49.275517 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:33:49.275527 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:33:49.275538 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:33:49.275549 | orchestrator | ok: [testbed-manager]
2026-03-13 00:33:49.275559 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:33:49.275570 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:33:49.275580 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:33:49.275591 | orchestrator |
2026-03-13 00:33:49.275602 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-03-13 00:33:49.275612 | orchestrator | Friday 13 March 2026 00:33:34 +0000 (0:00:00.783) 0:06:42.420 **********
2026-03-13 00:33:49.275623 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-03-13 00:33:49.275653 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-03-13 00:33:49.275665 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-03-13 00:33:49.275676 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-03-13 00:33:49.275686 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-03-13 00:33:49.275697 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-03-13 00:33:49.275708 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-03-13 00:33:49.275719 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-03-13 00:33:49.275729 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-03-13 00:33:49.275740 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-03-13 00:33:49.275770 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-03-13 00:33:49.275781 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-03-13 00:33:49.275792 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-03-13 00:33:49.275802 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-03-13 00:33:49.275813 | orchestrator |
2026-03-13 00:33:49.275824 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-03-13 00:33:49.275835 | orchestrator | Friday 13 March 2026 00:33:36 +0000 (0:00:02.551) 0:06:44.971 **********
2026-03-13 00:33:49.275846 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:33:49.275856 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:33:49.275867 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:33:49.275885 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:33:49.275895 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:33:49.275906 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:33:49.275917 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:33:49.275927 | orchestrator |
2026-03-13 00:33:49.275938 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-03-13 00:33:49.275950 | orchestrator | Friday 13 March 2026 00:33:37 +0000 (0:00:00.424) 0:06:45.396 **********
2026-03-13 00:33:49.275975 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 00:33:49.275997 | orchestrator |
2026-03-13 00:33:49.276009 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-03-13 00:33:49.276020 | orchestrator | Friday 13 March 2026 00:33:37 +0000 (0:00:00.700) 0:06:46.097 **********
2026-03-13 00:33:49.276030 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:33:49.276041 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:33:49.276052 | orchestrator | ok: [testbed-manager]
2026-03-13 00:33:49.276063 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:33:49.276073 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:33:49.276084 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:33:49.276095 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:33:49.276105 | orchestrator |
2026-03-13 00:33:49.276122 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-03-13 00:33:49.276133 | orchestrator | Friday 13 March 2026 00:33:38 +0000 (0:00:00.739) 0:06:46.836 **********
2026-03-13 00:33:49.276144 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:33:49.276155 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:33:49.276165 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:33:49.276176 | orchestrator | ok: [testbed-manager]
2026-03-13 00:33:49.276187 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:33:49.276197 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:33:49.276208 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:33:49.276219 | orchestrator |
2026-03-13 00:33:49.276230 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-03-13 00:33:49.276241 | orchestrator | Friday 13 March 2026 00:33:39 +0000 (0:00:00.909) 0:06:47.745 **********
2026-03-13 00:33:49.276251 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:33:49.276262 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:33:49.276273 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:33:49.276284 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:33:49.276295 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:33:49.276305 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:33:49.276316 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:33:49.276327 | orchestrator |
2026-03-13 00:33:49.276338 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-03-13 00:33:49.276348 | orchestrator | Friday 13 March 2026 00:33:39 +0000 (0:00:00.419) 0:06:48.165 **********
2026-03-13 00:33:49.276359 | orchestrator | ok: [testbed-manager]
2026-03-13 00:33:49.276370 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:33:49.276381 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:33:49.276391 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:33:49.276402 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:33:49.276413 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:33:49.276423 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:33:49.276434 | orchestrator |
2026-03-13 00:33:49.276445 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-03-13 00:33:49.276456 | orchestrator | Friday 13 March 2026 00:33:41 +0000 (0:00:01.491) 0:06:49.656 **********
2026-03-13 00:33:49.276467 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:33:49.276478 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:33:49.276488 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:33:49.276499 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:33:49.276517 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:33:49.276528 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:33:49.276539 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:33:49.276550 | orchestrator |
2026-03-13 00:33:49.276560 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-03-13 00:33:49.276571 | orchestrator | Friday 13 March 2026 00:33:41 +0000 (0:00:00.458) 0:06:50.114 **********
2026-03-13 00:33:49.276582 | orchestrator | ok: [testbed-manager]
2026-03-13 00:33:49.276593 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:33:49.276603 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:33:49.276614 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:33:49.276625 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:33:49.276636 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:33:49.276653 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:34:20.751351 | orchestrator |
2026-03-13 00:34:20.751464 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-03-13 00:34:20.751481 | orchestrator | Friday 13 March 2026 00:33:49 +0000 (0:00:07.713) 0:06:57.828 **********
2026-03-13 00:34:20.751495 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:34:20.751506 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:34:20.751518 | orchestrator | ok: [testbed-manager]
2026-03-13 00:34:20.751529 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:34:20.751540 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:34:20.751551 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:34:20.751562 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:34:20.751573 | orchestrator |
2026-03-13 00:34:20.751584 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-03-13 00:34:20.751595 | orchestrator | Friday 13 March 2026 00:33:50 +0000 (0:00:01.323) 0:06:59.151 **********
2026-03-13 00:34:20.751606 | orchestrator | ok: [testbed-manager]
2026-03-13 00:34:20.751617 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:34:20.751627 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:34:20.751638 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:34:20.751649 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:34:20.751660 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:34:20.751671 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:34:20.751682 | orchestrator |
2026-03-13 00:34:20.751717 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-03-13 00:34:20.751730 | orchestrator | Friday 13 March 2026 00:33:52 +0000 (0:00:01.934) 0:07:01.086 **********
2026-03-13 00:34:20.751741 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:34:20.751752 | orchestrator | ok: [testbed-manager]
2026-03-13 00:34:20.751762 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:34:20.751773 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:34:20.751784 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:34:20.751795 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:34:20.751805 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:34:20.751816 | orchestrator |
2026-03-13 00:34:20.751827 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-13 00:34:20.751838 | orchestrator | Friday 13 March 2026 00:33:54 +0000 (0:00:01.164) 0:07:02.729 **********
2026-03-13 00:34:20.751849 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:34:20.751860 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:34:20.751871 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:34:20.751882 | orchestrator | ok: [testbed-manager]
2026-03-13 00:34:20.751894 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:34:20.751906 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:34:20.751918 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:34:20.751930 | orchestrator |
2026-03-13 00:34:20.751943 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-13 00:34:20.751955 | orchestrator | Friday 13 March 2026 00:33:55 +0000 (0:00:01.164) 0:07:03.893 **********
2026-03-13 00:34:20.751967 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:34:20.751979 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:34:20.752025 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:34:20.752039 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:34:20.752052 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:34:20.752064 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:34:20.752077 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:34:20.752090 | orchestrator |
2026-03-13 00:34:20.752103 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-03-13 00:34:20.752116 | orchestrator | Friday 13 March 2026 00:33:56 +0000 (0:00:00.784) 0:07:04.678 **********
2026-03-13 00:34:20.752129 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:34:20.752141 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:34:20.752153 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:34:20.752166 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:34:20.752178 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:34:20.752190 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:34:20.752203 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:34:20.752215 | orchestrator |
2026-03-13 00:34:20.752228 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-03-13 00:34:20.752241 | orchestrator | Friday 13 March 2026 00:33:56 +0000 (0:00:00.476) 0:07:05.155 **********
2026-03-13 00:34:20.752253 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:34:20.752263 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:34:20.752274 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:34:20.752285 | orchestrator | ok: [testbed-manager]
2026-03-13 00:34:20.752296 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:34:20.752306 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:34:20.752317 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:34:20.752328 | orchestrator |
2026-03-13 00:34:20.752339 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-03-13 00:34:20.752349 | orchestrator | Friday 13 March 2026 00:33:57 +0000 (0:00:00.496) 0:07:05.651 ********** 2026-03-13 00:34:20.752360 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:34:20.752371 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:34:20.752381 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:34:20.752392 | orchestrator | ok: [testbed-manager] 2026-03-13 00:34:20.752403 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:34:20.752413 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:34:20.752424 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:34:20.752435 | orchestrator | 2026-03-13 00:34:20.752445 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-03-13 00:34:20.752456 | orchestrator | Friday 13 March 2026 00:33:57 +0000 (0:00:00.652) 0:07:06.304 ********** 2026-03-13 00:34:20.752467 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:34:20.752478 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:34:20.752488 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:34:20.752499 | orchestrator | ok: [testbed-manager] 2026-03-13 00:34:20.752510 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:34:20.752520 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:34:20.752531 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:34:20.752541 | orchestrator | 2026-03-13 00:34:20.752552 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-03-13 00:34:20.752563 | orchestrator | Friday 13 March 2026 00:33:58 +0000 (0:00:00.521) 0:07:06.825 ********** 2026-03-13 00:34:20.752574 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:34:20.752584 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:34:20.752595 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:34:20.752606 | orchestrator | ok: [testbed-manager] 2026-03-13 00:34:20.752617 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:34:20.752627 | orchestrator | ok: [testbed-node-2] 
2026-03-13 00:34:20.752638 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:34:20.752649 | orchestrator |
2026-03-13 00:34:20.752677 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-03-13 00:34:20.752689 | orchestrator | Friday 13 March 2026 00:34:03 +0000 (0:00:04.912) 0:07:11.738 **********
2026-03-13 00:34:20.752720 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:34:20.752741 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:34:20.752752 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:34:20.752763 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:34:20.752791 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:34:20.752803 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:34:20.752814 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:34:20.752825 | orchestrator |
2026-03-13 00:34:20.752835 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-03-13 00:34:20.752846 | orchestrator | Friday 13 March 2026 00:34:03 +0000 (0:00:00.509) 0:07:12.248 **********
2026-03-13 00:34:20.752859 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 00:34:20.752873 | orchestrator |
2026-03-13 00:34:20.752884 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-03-13 00:34:20.752895 | orchestrator | Friday 13 March 2026 00:34:04 +0000 (0:00:00.942) 0:07:13.190 **********
2026-03-13 00:34:20.752906 | orchestrator | ok: [testbed-manager]
2026-03-13 00:34:20.752917 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:34:20.752928 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:34:20.752939 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:34:20.752949 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:34:20.752960 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:34:20.752971 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:34:20.752982 | orchestrator |
2026-03-13 00:34:20.752993 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-03-13 00:34:20.753004 | orchestrator | Friday 13 March 2026 00:34:06 +0000 (0:00:01.902) 0:07:15.092 **********
2026-03-13 00:34:20.753014 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:34:20.753025 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:34:20.753036 | orchestrator | ok: [testbed-manager]
2026-03-13 00:34:20.753046 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:34:20.753057 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:34:20.753068 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:34:20.753078 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:34:20.753089 | orchestrator |
2026-03-13 00:34:20.753100 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-03-13 00:34:20.753111 | orchestrator | Friday 13 March 2026 00:34:07 +0000 (0:00:01.102) 0:07:16.195 **********
2026-03-13 00:34:20.753122 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:34:20.753133 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:34:20.753144 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:34:20.753155 | orchestrator | ok: [testbed-manager]
2026-03-13 00:34:20.753165 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:34:20.753176 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:34:20.753187 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:34:20.753198 | orchestrator |
2026-03-13 00:34:20.753208 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-03-13 00:34:20.753224 | orchestrator | Friday 13 March 2026 00:34:08 +0000 (0:00:00.824) 0:07:17.020 **********
2026-03-13 00:34:20.753236 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-13 00:34:20.753249 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-13 00:34:20.753260 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-13 00:34:20.753271 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-13 00:34:20.753282 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-13 00:34:20.753301 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-13 00:34:20.753312 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-13 00:34:20.753323 | orchestrator |
2026-03-13 00:34:20.753334 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-03-13 00:34:20.753345 | orchestrator | Friday 13 March 2026 00:34:10 +0000 (0:00:01.976) 0:07:18.996 **********
2026-03-13 00:34:20.753356 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 00:34:20.753367 | orchestrator |
2026-03-13 00:34:20.753378 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-03-13 00:34:20.753389 | orchestrator | Friday 13 March 2026 00:34:11 +0000 (0:00:00.849) 0:07:19.845 **********
2026-03-13 00:34:20.753400 | orchestrator | changed: [testbed-manager]
2026-03-13 00:34:20.753411 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:34:20.753422 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:34:20.753432 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:34:20.753443 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:34:20.753454 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:34:20.753465 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:34:20.753475 | orchestrator |
2026-03-13 00:34:20.753493 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-03-13 00:34:51.004303 | orchestrator | Friday 13 March 2026 00:34:20 +0000 (0:00:09.236) 0:07:29.082 **********
2026-03-13 00:34:51.004412 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:34:51.004429 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:34:51.004440 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:34:51.004452 | orchestrator | ok: [testbed-manager]
2026-03-13 00:34:51.004462 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:34:51.004473 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:34:51.004484 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:34:51.004495 | orchestrator |
2026-03-13 00:34:51.004507 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-03-13 00:34:51.004519 | orchestrator | Friday 13 March 2026 00:34:22 +0000 (0:00:01.898) 0:07:30.981 **********
2026-03-13 00:34:51.004530 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:34:51.004541 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:34:51.004552 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:34:51.004562 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:34:51.004573 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:34:51.004584 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:34:51.004595 | orchestrator |
2026-03-13 00:34:51.004607 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-03-13 00:34:51.004618 | orchestrator | Friday 13 March 2026 00:34:24 +0000 (0:00:01.391) 0:07:32.372 **********
2026-03-13 00:34:51.004629 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:34:51.004718 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:34:51.004734 | orchestrator | changed: [testbed-manager]
2026-03-13 00:34:51.004753 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:34:51.004773 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:34:51.004791 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:34:51.004811 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:34:51.004830 | orchestrator |
2026-03-13 00:34:51.004851 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-03-13 00:34:51.004871 | orchestrator |
2026-03-13 00:34:51.004891 | orchestrator | TASK [Include hardening role] **************************************************
2026-03-13 00:34:51.004910 | orchestrator | Friday 13 March 2026 00:34:25 +0000 (0:00:01.253) 0:07:33.625 **********
2026-03-13 00:34:51.004932 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:34:51.004952 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:34:51.005005 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:34:51.005025 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:34:51.005038 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:34:51.005051 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:34:51.005064 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:34:51.005076 | orchestrator |
2026-03-13 00:34:51.005089 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-03-13 00:34:51.005102 | orchestrator |
2026-03-13 00:34:51.005115 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-03-13 00:34:51.005128 | orchestrator | Friday 13 March 2026 00:34:25 +0000 (0:00:00.684) 0:07:34.309 **********
2026-03-13 00:34:51.005140 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:34:51.005158 | orchestrator | changed: [testbed-manager]
2026-03-13 00:34:51.005176 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:34:51.005194 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:34:51.005213 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:34:51.005250 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:34:51.005271 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:34:51.005288 | orchestrator |
2026-03-13 00:34:51.005305 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-03-13 00:34:51.005322 | orchestrator | Friday 13 March 2026 00:34:27 +0000 (0:00:01.378) 0:07:35.688 **********
2026-03-13 00:34:51.005340 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:34:51.005359 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:34:51.005379 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:34:51.005397 | orchestrator | ok: [testbed-manager]
2026-03-13 00:34:51.005414 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:34:51.005425 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:34:51.005435 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:34:51.005446 | orchestrator |
2026-03-13 00:34:51.005457 | orchestrator | TASK [Include auditd role] *****************************************************
2026-03-13 00:34:51.005476 | orchestrator | Friday 13 March 2026 00:34:28 +0000 (0:00:01.420) 0:07:37.109 **********
2026-03-13 00:34:51.005495 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:34:51.005513 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:34:51.005531 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:34:51.005547 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:34:51.005565 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:34:51.005584 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:34:51.005603 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:34:51.005621 | orchestrator |
2026-03-13 00:34:51.005670 | orchestrator | TASK [Include smartd role] *****************************************************
2026-03-13 00:34:51.005691 | orchestrator | Friday 13 March 2026 00:34:29 +0000 (0:00:00.626) 0:07:37.736 **********
2026-03-13 00:34:51.005711 | orchestrator | included: osism.services.smartd for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 00:34:51.005732 | orchestrator |
2026-03-13 00:34:51.005751 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-03-13 00:34:51.005769 | orchestrator | Friday 13 March 2026 00:34:30 +0000 (0:00:00.694) 0:07:38.430 **********
2026-03-13 00:34:51.005788 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 00:34:51.005809 | orchestrator |
2026-03-13 00:34:51.005827 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-03-13 00:34:51.005846 | orchestrator | Friday 13 March 2026 00:34:30 +0000 (0:00:00.663) 0:07:39.094 **********
2026-03-13 00:34:51.005866 | orchestrator | changed: [testbed-manager]
2026-03-13 00:34:51.005885 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:34:51.005904 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:34:51.005922 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:34:51.005958 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:34:51.005975 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:34:51.005986 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:34:51.005997 | orchestrator |
2026-03-13 00:34:51.006102 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-03-13 00:34:51.006119 | orchestrator | Friday 13 March 2026 00:34:39 +0000 (0:00:09.124) 0:07:48.219 **********
2026-03-13 00:34:51.006130 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:34:51.006141 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:34:51.006152 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:34:51.006163 | orchestrator | changed: [testbed-manager]
2026-03-13 00:34:51.006173 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:34:51.006184 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:34:51.006194 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:34:51.006205 | orchestrator |
2026-03-13 00:34:51.006216 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-03-13 00:34:51.006227 | orchestrator | Friday 13 March 2026 00:34:40 +0000 (0:00:00.834) 0:07:49.053 **********
2026-03-13 00:34:51.006238 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:34:51.006249 | orchestrator | changed: [testbed-manager]
2026-03-13 00:34:51.006260 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:34:51.006271 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:34:51.006281 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:34:51.006292 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:34:51.006304 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:34:51.006324 | orchestrator |
2026-03-13 00:34:51.006344 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-03-13 00:34:51.006365 | orchestrator | Friday 13 March 2026 00:34:42 +0000 (0:00:01.374) 0:07:50.428 **********
2026-03-13 00:34:51.006385 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:34:51.006404 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:34:51.006425 | orchestrator | changed: [testbed-manager]
2026-03-13 00:34:51.006446 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:34:51.006467 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:34:51.006486 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:34:51.006502 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:34:51.006513 | orchestrator |
2026-03-13 00:34:51.006523 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-03-13 00:34:51.006535 | orchestrator | Friday 13 March 2026 00:34:43 +0000 (0:00:01.873) 0:07:52.301 **********
2026-03-13 00:34:51.006545 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:34:51.006556 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:34:51.006567 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:34:51.006577 | orchestrator | changed: [testbed-manager]
2026-03-13 00:34:51.006588 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:34:51.006599 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:34:51.006609 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:34:51.006620 | orchestrator |
2026-03-13 00:34:51.006631 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-03-13 00:34:51.006670 | orchestrator | Friday 13 March 2026 00:34:45 +0000 (0:00:01.249) 0:07:53.551 **********
2026-03-13 00:34:51.006683 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:34:51.006693 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:34:51.006704 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:34:51.006715 | orchestrator | changed: [testbed-manager]
2026-03-13 00:34:51.006726 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:34:51.006745 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:34:51.006756 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:34:51.006767 | orchestrator |
2026-03-13 00:34:51.006778 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-03-13 00:34:51.006788 | orchestrator |
2026-03-13 00:34:51.006799 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-03-13 00:34:51.006810 | orchestrator | Friday 13 March 2026 00:34:46 +0000 (0:00:01.117) 0:07:54.669 **********
2026-03-13 00:34:51.006831 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 00:34:51.006842 | orchestrator |
2026-03-13 00:34:51.006853 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-13 00:34:51.006864 | orchestrator | Friday 13 March 2026 00:34:47 +0000 (0:00:00.952) 0:07:55.622 **********
2026-03-13 00:34:51.006875 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:34:51.006886 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:34:51.006896 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:34:51.006907 | orchestrator | ok: [testbed-manager]
2026-03-13 00:34:51.006918 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:34:51.006929 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:34:51.006940 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:34:51.006950 | orchestrator |
2026-03-13 00:34:51.006961 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-13 00:34:51.006972 | orchestrator | Friday 13 March 2026 00:34:48 +0000 (0:00:00.813) 0:07:56.436 **********
2026-03-13 00:34:51.006983 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:34:51.006993 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:34:51.007004 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:34:51.007015 | orchestrator | changed: [testbed-manager]
2026-03-13 00:34:51.007026 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:34:51.007036 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:34:51.007047 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:34:51.007057 | orchestrator |
2026-03-13 00:34:51.007068 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-03-13 00:34:51.007079 | orchestrator | Friday 13 March 2026 00:34:49 +0000 (0:00:01.120) 0:07:57.556 **********
2026-03-13 00:34:51.007090 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 00:34:51.007101 | orchestrator |
2026-03-13 00:34:51.007112 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-13 00:34:51.007123 | orchestrator | Friday 13 March 2026 00:34:50 +0000 (0:00:00.959) 0:07:58.516 **********
2026-03-13 00:34:51.007134 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:34:51.007144 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:34:51.007155 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:34:51.007166 | orchestrator | ok: [testbed-manager]
2026-03-13 00:34:51.007176 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:34:51.007187 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:34:51.007198 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:34:51.007208 | orchestrator |
2026-03-13 00:34:51.007230 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-13 00:34:52.458575 | orchestrator | Friday 13 March 2026 00:34:50 +0000 (0:00:00.817) 0:07:59.333 **********
2026-03-13 00:34:52.458735 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:34:52.458753 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:34:52.458765 | orchestrator | changed: [testbed-manager]
2026-03-13 00:34:52.458776 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:34:52.458787 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:34:52.458798 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:34:52.458809 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:34:52.458820 | orchestrator |
2026-03-13 00:34:52.458831 | orchestrator | PLAY RECAP *********************************************************************
2026-03-13 00:34:52.458843 | orchestrator | testbed-manager : ok=168  changed=41  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-03-13 00:34:52.458855 | orchestrator | testbed-node-0 : ok=177  changed=70  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-13 00:34:52.458866 | orchestrator | testbed-node-1 : ok=177  changed=70  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-13 00:34:52.458902 | orchestrator | testbed-node-2 : ok=177  changed=70  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-13 00:34:52.458914 | orchestrator | testbed-node-3 : ok=175  changed=66  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-03-13 00:34:52.458924 | orchestrator | testbed-node-4 : ok=175  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-13 00:34:52.458935 | orchestrator | testbed-node-5 : ok=175  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-13 00:34:52.458946 | orchestrator |
2026-03-13 00:34:52.458956 | orchestrator |
2026-03-13 00:34:52.458967 | orchestrator | TASKS RECAP ********************************************************************
2026-03-13 00:34:52.458978 | orchestrator | Friday 13 March 2026 00:34:52 +0000 (0:00:01.093) 0:08:00.426 **********
2026-03-13 00:34:52.458989 | orchestrator | ===============================================================================
2026-03-13 00:34:52.459000 | orchestrator | osism.commons.packages : Install required packages --------------------- 76.97s
2026-03-13 00:34:52.459010 | orchestrator | osism.commons.packages : Download required packages -------------------- 34.40s
2026-03-13 00:34:52.459036 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 32.15s
2026-03-13 00:34:52.459048 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.70s
2026-03-13 00:34:52.459058 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.53s
2026-03-13 00:34:52.459069 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.42s
2026-03-13 00:34:52.459082 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.40s
2026-03-13 00:34:52.459094 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.93s
2026-03-13 00:34:52.459106 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.48s
2026-03-13 00:34:52.459118 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.24s
2026-03-13 00:34:52.459131 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.12s
2026-03-13 00:34:52.459142 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.56s
2026-03-13 00:34:52.459155 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.39s
2026-03-13 00:34:52.459167 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.37s
2026-03-13 00:34:52.459180 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 7.93s
2026-03-13 00:34:52.459192 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.88s
2026-03-13 00:34:52.459209 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.71s
2026-03-13 00:34:52.459230 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.12s
2026-03-13 00:34:52.459249 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.52s
2026-03-13 00:34:52.459269 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.32s
2026-03-13 00:34:52.750763 | orchestrator | + osism apply fail2ban
2026-03-13 00:35:05.309905 | orchestrator | 2026-03-13 00:35:05 | INFO  | Prepare task for execution of fail2ban.
2026-03-13 00:35:05.389233 | orchestrator | 2026-03-13 00:35:05 | INFO  | Task 5e87be16-8d94-47e1-b2f0-7d401870805f (fail2ban) was prepared for execution.
2026-03-13 00:35:05.389325 | orchestrator | 2026-03-13 00:35:05 | INFO  | It takes a moment until task 5e87be16-8d94-47e1-b2f0-7d401870805f (fail2ban) has been started and output is visible here.
2026-03-13 00:35:26.108230 | orchestrator |
2026-03-13 00:35:26.108359 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-03-13 00:35:26.108406 | orchestrator |
2026-03-13 00:35:26.108416 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-03-13 00:35:26.108425 | orchestrator | Friday 13 March 2026 00:35:09 +0000 (0:00:00.257) 0:00:00.257 **********
2026-03-13 00:35:26.108435 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-13 00:35:26.108446 | orchestrator |
2026-03-13 00:35:26.108454 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-03-13 00:35:26.108462 | orchestrator | Friday 13 March 2026 00:35:10 +0000 (0:00:01.126) 0:00:01.384 **********
2026-03-13 00:35:26.108470 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:35:26.108479 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:35:26.108487 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:35:26.108495 | orchestrator | changed: [testbed-manager]
2026-03-13 00:35:26.108503 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:35:26.108510 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:35:26.108518 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:35:26.108526 | orchestrator |
2026-03-13 00:35:26.108534 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-03-13 00:35:26.108548 | orchestrator | Friday 13 March 2026 00:35:21 +0000 (0:00:10.328) 0:00:11.712 **********
2026-03-13 00:35:26.108561 | orchestrator | changed: [testbed-manager]
2026-03-13 00:35:26.108572 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:35:26.108645 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:35:26.108659 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:35:26.108673 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:35:26.108681 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:35:26.108689 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:35:26.108697 | orchestrator |
2026-03-13 00:35:26.108705 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-03-13 00:35:26.108717 | orchestrator | Friday 13 March 2026 00:35:22 +0000 (0:00:01.459) 0:00:13.172 **********
2026-03-13 00:35:26.108745 | orchestrator | ok: [testbed-manager]
2026-03-13 00:35:26.108766 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:35:26.108780 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:35:26.108793 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:35:26.108807 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:35:26.108820 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:35:26.108834 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:35:26.108848 | orchestrator |
2026-03-13 00:35:26.108862 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-03-13 00:35:26.108903 | orchestrator | Friday 13 March 2026 00:35:24 +0000 (0:00:01.503) 0:00:14.675 **********
2026-03-13 00:35:26.108917 | orchestrator | changed: [testbed-manager]
2026-03-13 00:35:26.108931 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:35:26.108941 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:35:26.108950 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:35:26.108967 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:35:26.108977 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:35:26.108986 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:35:26.108995 | orchestrator |
2026-03-13 00:35:26.109005 | orchestrator | PLAY RECAP *********************************************************************
2026-03-13 00:35:26.109030 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 00:35:26.109040 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 00:35:26.109048 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 00:35:26.109057 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 00:35:26.109084 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 00:35:26.109098 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 00:35:26.109111 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 00:35:26.109139 | orchestrator |
2026-03-13 00:35:26.109163 | orchestrator |
2026-03-13 00:35:26.109177 | orchestrator | TASKS RECAP ********************************************************************
2026-03-13 00:35:26.109191 | orchestrator | Friday 13 March 2026 00:35:25 +0000 (0:00:01.597) 0:00:16.273 **********
2026-03-13 00:35:26.109204 | orchestrator | ===============================================================================
2026-03-13 00:35:26.109220 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 10.33s
2026-03-13 00:35:26.109234 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.60s
2026-03-13 00:35:26.109248 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.50s
2026-03-13 00:35:26.109262 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.46s
2026-03-13 00:35:26.109272 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.13s
2026-03-13 00:35:26.386011 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-03-13 00:35:26.386141 | orchestrator | + osism apply network
2026-03-13 00:35:38.419886 | orchestrator | 2026-03-13 00:35:38 | INFO  | Prepare task for execution of network.
2026-03-13 00:35:38.492266 | orchestrator | 2026-03-13 00:35:38 | INFO  | Task 066f0b42-db22-47ce-958e-20fa3778971f (network) was prepared for execution.
2026-03-13 00:35:38.492363 | orchestrator | 2026-03-13 00:35:38 | INFO  | It takes a moment until task 066f0b42-db22-47ce-958e-20fa3778971f (network) has been started and output is visible here.
2026-03-13 00:36:07.020998 | orchestrator |
2026-03-13 00:36:07.021115 | orchestrator | PLAY [Apply role network] ******************************************************
2026-03-13 00:36:07.021134 | orchestrator |
2026-03-13 00:36:07.021148 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-03-13 00:36:07.021161 | orchestrator | Friday 13 March 2026 00:35:42 +0000 (0:00:00.216) 0:00:00.216 **********
2026-03-13 00:36:07.021173 | orchestrator | ok: [testbed-manager]
2026-03-13 00:36:07.021186 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:36:07.021198 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:36:07.021210 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:36:07.021222 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:36:07.021235 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:36:07.021247 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:36:07.021259 | orchestrator |
2026-03-13 00:36:07.021270 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-03-13 00:36:07.021282 | orchestrator | Friday 13 March 2026 00:35:43 +0000 (0:00:00.599) 0:00:00.816 **********
2026-03-13 00:36:07.021295 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-13 00:36:07.021310 | orchestrator |
2026-03-13 00:36:07.021322 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-03-13 00:36:07.021334 | orchestrator | Friday 13 March 2026 00:35:44 +0000 (0:00:01.015) 0:00:01.831 **********
2026-03-13 00:36:07.021347 | orchestrator | ok: [testbed-manager]
2026-03-13 00:36:07.021359 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:36:07.021372 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:36:07.021384 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:36:07.021395 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:36:07.021434 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:36:07.021447 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:36:07.021459 | orchestrator |
2026-03-13 00:36:07.021470 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-03-13 00:36:07.021481 | orchestrator | Friday 13 March 2026 00:35:46 +0000 (0:00:02.026) 0:00:03.858 **********
2026-03-13 00:36:07.021493 | orchestrator | ok: [testbed-manager]
2026-03-13 00:36:07.021504 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:36:07.021516 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:36:07.021557 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:36:07.021570 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:36:07.021584 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:36:07.021598 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:36:07.021612 | orchestrator |
2026-03-13 00:36:07.021626 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-03-13 00:36:07.021638 | orchestrator | Friday 13 March 2026 00:35:47 +0000 (0:00:01.090) 0:00:05.634 **********
2026-03-13 00:36:07.021652 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-03-13 00:36:07.021665 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-03-13 00:36:07.021679 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-03-13 00:36:07.021693 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-03-13 00:36:07.021704 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-03-13 00:36:07.021716 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-03-13 00:36:07.021729 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-03-13 00:36:07.021740 | orchestrator |
2026-03-13 00:36:07.021752 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-03-13 00:36:07.021764 | orchestrator | Friday 13 March 2026 00:35:48 +0000 (0:00:01.090) 0:00:06.724 **********
2026-03-13 00:36:07.021774 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-13 00:36:07.021786 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-13 00:36:07.021796 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-13 00:36:07.021808 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-13 00:36:07.021820 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-13 00:36:07.021831 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-13 00:36:07.021843 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-13 00:36:07.021854 | orchestrator |
2026-03-13 00:36:07.021865 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-03-13 00:36:07.021876 | orchestrator | Friday 13 March 2026 00:35:52 +0000 (0:00:03.526) 0:00:10.251 **********
2026-03-13 00:36:07.021888 | orchestrator | changed: [testbed-manager]
2026-03-13 00:36:07.021900 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:36:07.021912 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:36:07.021923 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:36:07.021947 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:36:07.021958 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:36:07.021971 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:36:07.021982 | orchestrator |
2026-03-13 00:36:07.021993 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-03-13 00:36:07.022005 | orchestrator | Friday 13 March 2026 00:35:54 +0000 (0:00:01.574) 0:00:11.826 **********
2026-03-13 00:36:07.022076 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-13 00:36:07.022091 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-13 00:36:07.022102 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-13 00:36:07.022114 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-13 00:36:07.022126 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-13 00:36:07.022159 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-13 00:36:07.022173 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-13 00:36:07.022186 | orchestrator |
2026-03-13 00:36:07.022198 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-03-13 00:36:07.022211 | orchestrator | Friday 13 March 2026 00:35:56 +0000 (0:00:02.046) 0:00:13.872 **********
2026-03-13 00:36:07.022237 | orchestrator | ok: [testbed-manager]
2026-03-13 00:36:07.022250 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:36:07.022262 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:36:07.022274 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:36:07.022287 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:36:07.022299 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:36:07.022312 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:36:07.022324 | orchestrator |
2026-03-13 00:36:07.022338 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-03-13 00:36:07.022376 | orchestrator | Friday 13 March 2026 00:35:57 +0000 (0:00:01.139) 0:00:15.012 **********
2026-03-13 00:36:07.022391 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:36:07.022404 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:36:07.022418 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:36:07.022430 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:36:07.022443 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:36:07.022455 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:36:07.022468 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:36:07.022480 | orchestrator |
2026-03-13 00:36:07.022493 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-03-13 00:36:07.022506 | orchestrator | Friday 13 March 2026 00:35:57 +0000 (0:00:00.662) 0:00:15.674 **********
2026-03-13 00:36:07.022518 | orchestrator | ok: [testbed-manager]
2026-03-13 00:36:07.022554 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:36:07.022566 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:36:07.022577 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:36:07.022588 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:36:07.022600 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:36:07.022611 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:36:07.022622 | orchestrator |
2026-03-13 00:36:07.022634 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-03-13 00:36:07.022645 | orchestrator | Friday 13 March 2026 00:36:00 +0000 (0:00:02.362) 0:00:18.037 **********
2026-03-13 00:36:07.022656 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:36:07.022668 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:36:07.022679 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:36:07.022690 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:36:07.022702 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:36:07.022713 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:36:07.022726 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2026-03-13 00:36:07.022740 | orchestrator |
2026-03-13 00:36:07.022752 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-03-13 00:36:07.022764 | orchestrator | Friday 13 March 2026 00:36:01 +0000 (0:00:00.819) 0:00:18.857 **********
2026-03-13 00:36:07.022776 | orchestrator | ok: [testbed-manager]
2026-03-13 00:36:07.022786 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:36:07.022797 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:36:07.022807 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:36:07.022818 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:36:07.022829 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:36:07.022841 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:36:07.022854 | orchestrator |
2026-03-13 00:36:07.022866 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-03-13 00:36:07.022877 | orchestrator | Friday 13 March 2026 00:36:02 +0000 (0:00:01.655) 0:00:20.512 **********
2026-03-13 00:36:07.022899 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-13 00:36:07.022915 | orchestrator |
2026-03-13 00:36:07.022927 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-03-13 00:36:07.022949 | orchestrator | Friday 13 March 2026 00:36:04 +0000 (0:00:01.097) 0:00:21.788 **********
2026-03-13 00:36:07.022961 | orchestrator | ok: [testbed-manager]
2026-03-13 00:36:07.022973 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:36:07.022984 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:36:07.022996 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:36:07.023007 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:36:07.023018 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:36:07.023030 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:36:07.023041 | orchestrator |
2026-03-13 00:36:07.023053 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-03-13 00:36:07.023064 | orchestrator | Friday 13 March 2026 00:36:05 +0000 (0:00:00.646) 0:00:22.885 **********
2026-03-13 00:36:07.023076 | orchestrator | ok: [testbed-manager]
2026-03-13 00:36:07.023088 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:36:07.023100 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:36:07.023111 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:36:07.023122 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:36:07.023134 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:36:07.023146 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:36:07.023159 | orchestrator |
2026-03-13 00:36:07.023171 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-13 00:36:07.023183 | orchestrator | Friday 13 March 2026 00:36:05 +0000 (0:00:00.646) 0:00:23.532 **********
2026-03-13 00:36:07.023196 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-03-13 00:36:07.023208 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-03-13 00:36:07.023221 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-03-13 00:36:07.023233 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-03-13 00:36:07.023245 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-13 00:36:07.023257 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-03-13 00:36:07.023269 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-03-13 00:36:07.023281 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-13 00:36:07.023293 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-13 00:36:07.023306 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-13 00:36:07.023318 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-03-13 00:36:07.023330 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-13 00:36:07.023342 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-13 00:36:07.023355 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-13 00:36:07.023367 | orchestrator |
2026-03-13 00:36:07.023394 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-03-13 00:36:22.450359 | orchestrator | Friday 13 March 2026 00:36:07 +0000 (0:00:01.223) 0:00:24.756 **********
2026-03-13 00:36:22.450465 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:36:22.450481 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:36:22.450492 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:36:22.450503 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:36:22.450514 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:36:22.450525 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:36:22.450536 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:36:22.450548 | orchestrator |
2026-03-13 00:36:22.450608 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-03-13 00:36:22.450620 | orchestrator | Friday 13 March 2026 00:36:07 +0000 (0:00:00.613) 0:00:25.369 **********
2026-03-13 00:36:22.450633 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-4, testbed-node-1, testbed-node-0, testbed-node-2, testbed-node-3, testbed-node-5
2026-03-13 00:36:22.450668 | orchestrator |
2026-03-13 00:36:22.450680 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-03-13 00:36:22.450691 | orchestrator | Friday 13 March 2026 00:36:12 +0000 (0:00:04.536) 0:00:29.905 **********
2026-03-13 00:36:22.450704 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-03-13 00:36:22.450718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-03-13 00:36:22.450730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-03-13 00:36:22.450755 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-03-13 00:36:22.450767 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-03-13 00:36:22.450778 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-03-13 00:36:22.450790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-03-13 00:36:22.450801 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-03-13 00:36:22.450812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-03-13 00:36:22.450830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-03-13 00:36:22.450842 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-03-13 00:36:22.450869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-03-13 00:36:22.450881 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-03-13 00:36:22.450901 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-03-13 00:36:22.450913 | orchestrator |
2026-03-13 00:36:22.450926 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-03-13 00:36:22.450939 | orchestrator | Friday 13 March 2026 00:36:17 +0000 (0:00:05.543) 0:00:35.448 **********
2026-03-13 00:36:22.450953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-03-13 00:36:22.450966 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-03-13 00:36:22.450979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-03-13 00:36:22.450992 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-03-13 00:36:22.451010 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-03-13 00:36:22.451023 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-03-13 00:36:22.451036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-03-13 00:36:22.451049 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-03-13 00:36:22.451061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-03-13 00:36:22.451074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-03-13 00:36:22.451087 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-03-13 00:36:22.451100 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-03-13 00:36:22.451131 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-03-13 00:36:35.551161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-03-13 00:36:35.551265 | orchestrator |
2026-03-13 00:36:35.551278 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-03-13 00:36:35.551289 | orchestrator | Friday 13 March 2026 00:36:22 +0000 (0:00:04.977) 0:00:40.425 **********
2026-03-13 00:36:35.551299 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-13 00:36:35.551307 | orchestrator |
2026-03-13 00:36:35.551316 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-03-13 00:36:35.551324 | orchestrator | Friday 13 March 2026 00:36:23 +0000 (0:00:01.247) 0:00:41.673 **********
2026-03-13 00:36:35.551332 | orchestrator | ok: [testbed-manager]
2026-03-13 00:36:35.551342 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:36:35.551350 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:36:35.551358 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:36:35.551366 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:36:35.551374 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:36:35.551382 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:36:35.551390 | orchestrator |
2026-03-13 00:36:35.551398 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-13 00:36:35.551406 | orchestrator | Friday 13 March 2026 00:36:25 +0000 (0:00:01.144) 0:00:42.818 **********
2026-03-13 00:36:35.551414 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-13 00:36:35.551423 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-13 00:36:35.551431 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-13 00:36:35.551439 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-13 00:36:35.551447 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-13 00:36:35.551455 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-13 00:36:35.551477 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-13 00:36:35.551485 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:36:35.551494 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-13 00:36:35.551502 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-13 00:36:35.551510 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-13 00:36:35.551518 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-13 00:36:35.551526 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-13 00:36:35.551534 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:36:35.551542 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-13 00:36:35.551550 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
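The netdev items logged above carry everything systemd-networkd needs for a unicast VXLAN, and the cleanup task confirms the role writes them to /etc/systemd/network/30-vxlan0.netdev and 30-vxlan0.network. The role's actual templates are not visible in this log, so the key layout below is an assumption; only the interface name, VNI (42), local IP (192.168.16.5), MTU (1350), address (192.168.112.5/20), and peer IPs come from the logged testbed-manager item. A minimal sketch of what the rendered pair might look like:

```ini
; /etc/systemd/network/30-vxlan0.netdev -- hypothetical rendering of the logged item
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5
```

```ini
; /etc/systemd/network/30-vxlan0.network -- matching network file
[Match]
Name=vxlan0

[Network]
Address=192.168.112.5/20

; With no multicast group, one all-zeros FDB entry per peer from the
; logged 'dests' list floods BUM traffic to that destination.
[BridgeFDB]
MACAddress=00:00:00:00:00:00
Destination=192.168.16.10
```

A [BridgeFDB] section would be repeated once per entry in the item's 'dests' list to build the full mesh.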
2026-03-13 00:36:35.551558 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-13 00:36:35.551565 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-13 00:36:35.551640 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:36:35.551651 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-13 00:36:35.551659 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-13 00:36:35.551667 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-13 00:36:35.551675 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-13 00:36:35.551683 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:36:35.551691 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-13 00:36:35.551699 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-13 00:36:35.551707 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-13 00:36:35.551717 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-13 00:36:35.551726 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:36:35.551735 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:36:35.551744 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-13 00:36:35.551753 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-13 00:36:35.551762 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-13 00:36:35.551771 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-13 00:36:35.551780 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:36:35.551789 | orchestrator |
2026-03-13 00:36:35.551798 | orchestrator | TASK [osism.commons.network : Include network extra init] **********************
2026-03-13 00:36:35.551822 | orchestrator | Friday 13 March 2026 00:36:26 +0000 (0:00:00.943) 0:00:43.762 **********
2026-03-13 00:36:35.551833 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-13 00:36:35.551842 | orchestrator |
2026-03-13 00:36:35.551852 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] ****************
2026-03-13 00:36:35.551861 | orchestrator | Friday 13 March 2026 00:36:27 +0000 (0:00:01.209) 0:00:44.971 **********
2026-03-13 00:36:35.551869 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:36:35.551878 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:36:35.551888 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:36:35.551896 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:36:35.551906 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:36:35.551915 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:36:35.551923 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:36:35.551932 | orchestrator |
2026-03-13 00:36:35.551941 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] *******
2026-03-13 00:36:35.551950 | orchestrator | Friday 13 March 2026 00:36:27 +0000 (0:00:00.605) 0:00:45.577 **********
2026-03-13 00:36:35.551960 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:36:35.551969 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:36:35.551978 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:36:35.551987 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:36:35.551996 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:36:35.552005 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:36:35.552013 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:36:35.552022 | orchestrator |
2026-03-13 00:36:35.552031 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] *****
2026-03-13 00:36:35.552040 | orchestrator | Friday 13 March 2026 00:36:28 +0000 (0:00:00.777) 0:00:46.355 **********
2026-03-13 00:36:35.552049 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:36:35.552065 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:36:35.552073 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:36:35.552081 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:36:35.552088 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:36:35.552096 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:36:35.552104 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:36:35.552112 | orchestrator |
2026-03-13 00:36:35.552120 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] *****
2026-03-13 00:36:35.552128 | orchestrator | Friday 13 March 2026 00:36:29 +0000 (0:00:00.587) 0:00:46.942 **********
2026-03-13 00:36:35.552136 | orchestrator | ok: [testbed-manager]
2026-03-13 00:36:35.552144 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:36:35.552156 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:36:35.552164 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:36:35.552172 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:36:35.552180 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:36:35.552188 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:36:35.552196 | orchestrator |
2026-03-13 00:36:35.552204 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] *******
2026-03-13 00:36:35.552212 | orchestrator | Friday 13 March 2026 00:36:31 +0000 (0:00:01.838) 0:00:48.781 **********
2026-03-13 00:36:35.552220 | orchestrator | ok: [testbed-manager]
2026-03-13 00:36:35.552228 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:36:35.552236 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:36:35.552243 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:36:35.552251 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:36:35.552259 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:36:35.552267 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:36:35.552274 | orchestrator |
2026-03-13 00:36:35.552282 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-03-13 00:36:35.552290 | orchestrator | Friday 13 March 2026 00:36:31 +0000 (0:00:00.952) 0:00:49.733 **********
2026-03-13 00:36:35.552298 | orchestrator | ok: [testbed-manager]
2026-03-13 00:36:35.552306 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:36:35.552314 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:36:35.552322 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:36:35.552329 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:36:35.552337 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:36:35.552345 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:36:35.552352 | orchestrator |
2026-03-13 00:36:35.552360 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-03-13 00:36:35.552368 | orchestrator | Friday 13 March 2026 00:36:34 +0000 (0:00:02.213) 0:00:51.946 **********
2026-03-13 00:36:35.552376 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:36:35.552384 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:36:35.552392 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:36:35.552400 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:36:35.552408 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:36:35.552415 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:36:35.552423 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:36:35.552431 | orchestrator |
2026-03-13 00:36:35.552439 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-03-13 00:36:35.552447 | orchestrator | Friday 13 March 2026 00:36:35 +0000 (0:00:00.812) 0:00:52.759 **********
2026-03-13 00:36:35.552455 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:36:35.552463 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:36:35.552471 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:36:35.552479 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:36:35.552486 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:36:35.552494 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:36:35.552502 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:36:35.552510 | orchestrator |
2026-03-13 00:36:35.552518 | orchestrator | PLAY RECAP *********************************************************************
2026-03-13 00:36:35.552526 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-13 00:36:35.552541 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-13 00:36:35.552555 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-13 00:36:35.866296 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-13 00:36:35.866370 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-13 00:36:35.866376 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-13 00:36:35.866381 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-13 00:36:35.866386 | orchestrator |
2026-03-13 00:36:35.866392 |
2026-03-13 00:36:35.866397 | orchestrator | TASKS RECAP ********************************************************************
2026-03-13 00:36:35.866402 | orchestrator | Friday 13 March 2026 00:36:35 +0000 (0:00:00.523) 0:00:53.283 **********
2026-03-13 00:36:35.866407 | orchestrator | ===============================================================================
2026-03-13 00:36:35.866411 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.54s
2026-03-13 00:36:35.866416 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 4.98s
2026-03-13 00:36:35.866420 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.54s
2026-03-13 00:36:35.866425 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.53s
2026-03-13 00:36:35.866429 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.36s
2026-03-13 00:36:35.866434 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.21s
2026-03-13 00:36:35.866438 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 2.05s
2026-03-13 00:36:35.866443 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.03s
2026-03-13 00:36:35.866447 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.84s
2026-03-13 00:36:35.866452 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.78s
2026-03-13 00:36:35.866457 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.66s
2026-03-13 00:36:35.866461 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.57s
2026-03-13 00:36:35.866466 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.28s
2026-03-13 00:36:35.866471 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.25s
2026-03-13 00:36:35.866475 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.22s
2026-03-13 00:36:35.866480 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.21s
2026-03-13 00:36:35.866484 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.15s
2026-03-13 00:36:35.866489 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.14s
2026-03-13 00:36:35.866493 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.10s
2026-03-13 00:36:35.866498 | orchestrator | osism.commons.network : Create required directories --------------------- 1.09s
2026-03-13 00:36:36.139776 | orchestrator | + osism apply wireguard
2026-03-13 00:36:48.100993 | orchestrator | 2026-03-13 00:36:48 | INFO  | Prepare task for execution of wireguard.
2026-03-13 00:36:48.167236 | orchestrator | 2026-03-13 00:36:48 | INFO  | Task 8ba5e107-da88-4219-a516-26bde1a2cab0 (wireguard) was prepared for execution.
2026-03-13 00:36:48.167359 | orchestrator | 2026-03-13 00:36:48 | INFO  | It takes a moment until task 8ba5e107-da88-4219-a516-26bde1a2cab0 (wireguard) has been started and output is visible here.
2026-03-13 00:37:04.835158 | orchestrator |
2026-03-13 00:37:04.835240 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-03-13 00:37:04.835248 | orchestrator |
2026-03-13 00:37:04.835253 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-03-13 00:37:04.835258 | orchestrator | Friday 13 March 2026 00:36:51 +0000 (0:00:00.162) 0:00:00.162 **********
2026-03-13 00:37:04.835262 | orchestrator | ok: [testbed-manager]
2026-03-13 00:37:04.835267 | orchestrator |
2026-03-13 00:37:04.835271 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-03-13 00:37:04.835275 | orchestrator | Friday 13 March 2026 00:36:52 +0000 (0:00:01.149) 0:00:01.311 **********
2026-03-13 00:37:04.835280 | orchestrator | changed: [testbed-manager]
2026-03-13 00:37:04.835285 | orchestrator |
2026-03-13 00:37:04.835289 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-03-13 00:37:04.835293 | orchestrator | Friday 13 March 2026 00:36:58 +0000 (0:00:05.116) 0:00:06.428 **********
2026-03-13 00:37:04.835297 | orchestrator | changed: [testbed-manager]
2026-03-13 00:37:04.835301 | orchestrator |
2026-03-13 00:37:04.835305 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-03-13 00:37:04.835309 | orchestrator | Friday 13 March 2026 00:36:58 +0000 (0:00:00.501) 0:00:06.929 **********
2026-03-13 00:37:04.835312 | orchestrator | changed: [testbed-manager]
2026-03-13 00:37:04.835316 | orchestrator |
2026-03-13 00:37:04.835320 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-03-13 00:37:04.835324 | orchestrator | Friday 13 March 2026 00:36:58 +0000 (0:00:00.381) 0:00:07.310 **********
2026-03-13 00:37:04.835328 | orchestrator | ok: [testbed-manager]
2026-03-13 00:37:04.835331 | orchestrator |
2026-03-13 00:37:04.835335 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-03-13 00:37:04.835339 | orchestrator | Friday 13 March 2026 00:36:59 +0000 (0:00:00.557) 0:00:07.868 **********
2026-03-13 00:37:04.835343 | orchestrator | ok: [testbed-manager]
2026-03-13 00:37:04.835347 | orchestrator |
2026-03-13 00:37:04.835351 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-03-13 00:37:04.835354 | orchestrator | Friday 13 March 2026 00:36:59 +0000 (0:00:00.392) 0:00:08.261 **********
2026-03-13 00:37:04.835358 | orchestrator | ok: [testbed-manager]
2026-03-13 00:37:04.835362 | orchestrator |
2026-03-13 00:37:04.835366 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-03-13 00:37:04.835369 | orchestrator | Friday 13 March 2026 00:37:00 +0000 (0:00:00.381) 0:00:08.642 **********
2026-03-13 00:37:04.835373 | orchestrator | changed: [testbed-manager]
2026-03-13 00:37:04.835377 | orchestrator |
2026-03-13 00:37:04.835381 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-03-13 00:37:04.835385 | orchestrator | Friday 13 March 2026 00:37:01 +0000 (0:00:01.031) 0:00:09.673 **********
2026-03-13 00:37:04.835389 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-13 00:37:04.835393 | orchestrator | changed: [testbed-manager]
2026-03-13 00:37:04.835397 | orchestrator |
2026-03-13 00:37:04.835401 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-03-13 00:37:04.835405 | orchestrator | Friday 13 March 2026 00:37:02 +0000 (0:00:00.865) 0:00:10.539 **********
2026-03-13 00:37:04.835409 | orchestrator | changed: [testbed-manager]
2026-03-13 00:37:04.835413 | orchestrator |
2026-03-13 00:37:04.835417 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-03-13 00:37:04.835421 | orchestrator | Friday 13 March 2026 00:37:03 +0000 (0:00:01.556) 0:00:12.095 **********
2026-03-13 00:37:04.835425 | orchestrator | changed: [testbed-manager]
2026-03-13 00:37:04.835429 | orchestrator |
2026-03-13 00:37:04.835433 | orchestrator | PLAY RECAP *********************************************************************
2026-03-13 00:37:04.835468 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 00:37:04.835474 | orchestrator |
2026-03-13 00:37:04.835478 | orchestrator |
2026-03-13 00:37:04.835482 | orchestrator | TASKS RECAP ********************************************************************
2026-03-13 00:37:04.835486 | orchestrator | Friday 13 March 2026 00:37:04 +0000 (0:00:00.841) 0:00:12.936 **********
2026-03-13 00:37:04.835490 | orchestrator | ===============================================================================
2026-03-13 00:37:04.835494 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.12s
2026-03-13 00:37:04.835501 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.56s
2026-03-13 00:37:04.835506 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.15s
2026-03-13 00:37:04.835510 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.03s
2026-03-13 00:37:04.835514 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.87s
2026-03-13 00:37:04.835518 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.84s
2026-03-13 00:37:04.835521 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.56s
2026-03-13 00:37:04.835525 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.50s
2026-03-13 00:37:04.835529 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.39s
2026-03-13 00:37:04.835533 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.38s
2026-03-13 00:37:04.835538 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.38s
2026-03-13 00:37:05.034125 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-03-13 00:37:05.064892 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-03-13 00:37:05.064982 | orchestrator | Dload Upload Total Spent Left Speed
2026-03-13 00:37:05.141866 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 181 0 --:--:-- --:--:-- --:--:-- 181
2026-03-13 00:37:05.152458 | orchestrator | + osism apply --environment custom workarounds
2026-03-13 00:37:06.898138 | orchestrator | 2026-03-13 00:37:06 | INFO  | Trying to run play workarounds in environment custom
2026-03-13 00:37:16.978556 | orchestrator | 2026-03-13 00:37:16 | INFO  | Prepare task for execution of workarounds.
2026-03-13 00:37:17.043396 | orchestrator | 2026-03-13 00:37:17 | INFO  | Task bceffe8a-65c4-40c7-9047-c0514e297b98 (workarounds) was prepared for execution.
2026-03-13 00:37:17.043497 | orchestrator | 2026-03-13 00:37:17 | INFO  | It takes a moment until task bceffe8a-65c4-40c7-9047-c0514e297b98 (workarounds) has been started and output is visible here.
2026-03-13 00:37:40.581999 | orchestrator |
2026-03-13 00:37:40.582163 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-13 00:37:40.582181 | orchestrator |
2026-03-13 00:37:40.582193 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-03-13 00:37:40.582205 | orchestrator | Friday 13 March 2026 00:37:20 +0000 (0:00:00.115) 0:00:00.115 **********
2026-03-13 00:37:40.582217 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-03-13 00:37:40.582228 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-03-13 00:37:40.582239 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-03-13 00:37:40.582250 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-03-13 00:37:40.582261 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-03-13 00:37:40.582271 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-03-13 00:37:40.582282 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-03-13 00:37:40.582319 | orchestrator |
2026-03-13 00:37:40.582331 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-03-13 00:37:40.582342 | orchestrator |
2026-03-13 00:37:40.582353 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-13 00:37:40.582364 | orchestrator | Friday 13 March 2026 00:37:21 +0000 (0:00:00.584) 0:00:00.699 **********
2026-03-13 00:37:40.582375 | orchestrator | ok: [testbed-manager]
2026-03-13 00:37:40.582387 | orchestrator |
2026-03-13 00:37:40.582398 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-03-13 00:37:40.582408 | orchestrator |
2026-03-13 00:37:40.582419 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-13 00:37:40.582430 | orchestrator | Friday 13 March 2026 00:37:23 +0000 (0:00:02.156) 0:00:02.855 **********
2026-03-13 00:37:40.582441 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:37:40.582452 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:37:40.582462 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:37:40.582473 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:37:40.582484 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:37:40.582494 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:37:40.582505 | orchestrator |
2026-03-13 00:37:40.582516 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-03-13 00:37:40.582526 | orchestrator |
2026-03-13 00:37:40.582538 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-03-13 00:37:40.582551 | orchestrator | Friday 13 March 2026 00:37:25 +0000 (0:00:01.584) 0:00:04.738 **********
2026-03-13 00:37:40.582564 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-13 00:37:40.582578 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-13 00:37:40.582591 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-13 00:37:40.582603 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-13 00:37:40.582615 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-13 00:37:40.582641 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-13 00:37:40.582655 | orchestrator |
2026-03-13 00:37:40.582667 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-03-13 00:37:40.582679 | orchestrator | Friday 13 March 2026 00:37:27 +0000 (0:00:01.584) 0:00:06.323 **********
2026-03-13 00:37:40.582873 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:37:40.582901 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:37:40.582912 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:37:40.582923 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:37:40.582933 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:37:40.582944 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:37:40.582955 | orchestrator |
2026-03-13 00:37:40.582966 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-03-13 00:37:40.582977 | orchestrator | Friday 13 March 2026 00:37:30 +0000 (0:00:03.737) 0:00:10.060 **********
2026-03-13 00:37:40.582988 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:37:40.582998 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:37:40.583009 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:37:40.583020 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:37:40.583030 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:37:40.583041 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:37:40.583052 | orchestrator |
2026-03-13 00:37:40.583062 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-03-13 00:37:40.583076 | orchestrator |
2026-03-13 00:37:40.583095 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-03-13 00:37:40.583139 | orchestrator | Friday 13 March 2026 00:37:31 +0000 (0:00:00.561) 0:00:10.621 **********
2026-03-13 00:37:40.583159 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:37:40.583176 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:37:40.583196 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:37:40.583214 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:37:40.583232 | orchestrator | changed: [testbed-manager]
2026-03-13 00:37:40.583251 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:37:40.583265 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:37:40.583276 | orchestrator |
2026-03-13 00:37:40.583287 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-03-13 00:37:40.583298 | orchestrator | Friday 13 March 2026 00:37:32 +0000 (0:00:01.479) 0:00:12.101 **********
2026-03-13 00:37:40.583309 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:37:40.583320 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:37:40.583330 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:37:40.583341 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:37:40.583352 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:37:40.583362 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:37:40.583394 | orchestrator | changed: [testbed-manager]
2026-03-13 00:37:40.583405 | orchestrator |
2026-03-13 00:37:40.583416 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-03-13 00:37:40.583427 | orchestrator | Friday 13 March 2026 00:37:34 +0000 (0:00:01.526) 0:00:13.627 **********
2026-03-13 00:37:40.583438 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:37:40.583449 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:37:40.583460 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:37:40.583470 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:37:40.583481 | orchestrator | ok: [testbed-manager]
2026-03-13 00:37:40.583492 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:37:40.583502 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:37:40.583513 | orchestrator |
2026-03-13 00:37:40.583524 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-03-13 00:37:40.583535 | orchestrator | Friday 13 March 2026 00:37:35 +0000 (0:00:01.422) 0:00:15.050 **********
2026-03-13 00:37:40.583545 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:37:40.583556 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:37:40.583567 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:37:40.583578 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:37:40.583589 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:37:40.583599 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:37:40.583610 | orchestrator | changed: [testbed-manager]
2026-03-13 00:37:40.583621 | orchestrator |
2026-03-13 00:37:40.583631 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-03-13 00:37:40.583642 | orchestrator | Friday 13 March 2026 00:37:37 +0000 (0:00:01.551) 0:00:16.602 **********
2026-03-13 00:37:40.583653 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:37:40.583663 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:37:40.583674 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:37:40.583685 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:37:40.583725 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:37:40.583736 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:37:40.583747 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:37:40.583758 | orchestrator |
2026-03-13 00:37:40.583768 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-03-13 00:37:40.583779 | orchestrator |
2026-03-13 00:37:40.583790 | orchestrator | TASK [Install python3-docker] **************************************************
2026-03-13 00:37:40.583801 | orchestrator | Friday 13 March 2026 00:37:37 +0000 (0:00:00.551) 0:00:17.153 **********
2026-03-13 00:37:40.583812 | orchestrator | ok: [testbed-manager]
2026-03-13 00:37:40.583823 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:37:40.583833 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:37:40.583844 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:37:40.583855 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:37:40.583866 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:37:40.583885 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:37:40.583896 | orchestrator |
2026-03-13 00:37:40.583907 | orchestrator | PLAY RECAP *********************************************************************
2026-03-13 00:37:40.583919 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-13 00:37:40.583932 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-13 00:37:40.583943 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-13 00:37:40.583963 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-13 00:37:40.583974 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-13 00:37:40.583985 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-13 00:37:40.583996 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-13 00:37:40.584007 | orchestrator |
2026-03-13 00:37:40.584017 | orchestrator |
2026-03-13 00:37:40.584028 | orchestrator | TASKS RECAP ********************************************************************
2026-03-13 00:37:40.584039 | orchestrator | Friday 13 March 2026 00:37:40 +0000 (0:00:02.693) 0:00:19.847 **********
2026-03-13 00:37:40.584050 | orchestrator | ===============================================================================
2026-03-13 00:37:40.584061 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.74s
2026-03-13 00:37:40.584072 | orchestrator | Install python3-docker -------------------------------------------------- 2.69s
2026-03-13 00:37:40.584083 | orchestrator | Apply netplan configuration --------------------------------------------- 2.16s
2026-03-13 00:37:40.584094 | orchestrator | Apply netplan configuration --------------------------------------------- 1.88s
2026-03-13 00:37:40.584105 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.58s
2026-03-13 00:37:40.584115 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.55s
2026-03-13 00:37:40.584126 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.53s
2026-03-13 00:37:40.584137 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.48s
2026-03-13 00:37:40.584147 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.42s
2026-03-13 00:37:40.584158 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.58s
2026-03-13 00:37:40.584169 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.56s
2026-03-13 00:37:40.584188 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.55s
2026-03-13 00:37:40.951394 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-03-13 00:37:52.833918 | orchestrator | 2026-03-13 00:37:52 | INFO  | Prepare task for execution of reboot.
2026-03-13 00:37:52.897333 | orchestrator | 2026-03-13 00:37:52 | INFO  | Task ed9f675f-07cd-4247-aa29-31748c023a82 (reboot) was prepared for execution.
2026-03-13 00:37:52.897409 | orchestrator | 2026-03-13 00:37:52 | INFO  | It takes a moment until task ed9f675f-07cd-4247-aa29-31748c023a82 (reboot) has been started and output is visible here.
2026-03-13 00:38:02.129533 | orchestrator | 2026-03-13 00:38:02.129648 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-13 00:38:02.129667 | orchestrator | 2026-03-13 00:38:02.129680 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-13 00:38:02.129717 | orchestrator | Friday 13 March 2026 00:37:56 +0000 (0:00:00.150) 0:00:00.150 ********** 2026-03-13 00:38:02.129823 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:38:02.129836 | orchestrator | 2026-03-13 00:38:02.129847 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-13 00:38:02.129858 | orchestrator | Friday 13 March 2026 00:37:56 +0000 (0:00:00.084) 0:00:00.235 ********** 2026-03-13 00:38:02.129869 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:38:02.129880 | orchestrator | 2026-03-13 00:38:02.129891 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-13 00:38:02.129902 | orchestrator | Friday 13 March 2026 00:37:57 +0000 (0:00:00.886) 0:00:01.121 ********** 2026-03-13 00:38:02.129913 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:38:02.129924 | orchestrator | 2026-03-13 00:38:02.129935 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-13 00:38:02.129946 | orchestrator | 2026-03-13 00:38:02.129957 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-13 00:38:02.129967 | orchestrator | Friday 13 March 2026 00:37:57 +0000 (0:00:00.100) 0:00:01.221 ********** 2026-03-13 00:38:02.129978 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:38:02.129989 | orchestrator | 2026-03-13 00:38:02.130000 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-13 00:38:02.130011 | orchestrator | Friday 13 March 2026 
00:37:57 +0000 (0:00:00.077) 0:00:01.299 ********** 2026-03-13 00:38:02.130081 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:38:02.130094 | orchestrator | 2026-03-13 00:38:02.130107 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-13 00:38:02.130119 | orchestrator | Friday 13 March 2026 00:37:58 +0000 (0:00:00.637) 0:00:01.936 ********** 2026-03-13 00:38:02.130132 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:38:02.130144 | orchestrator | 2026-03-13 00:38:02.130157 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-13 00:38:02.130170 | orchestrator | 2026-03-13 00:38:02.130182 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-13 00:38:02.130195 | orchestrator | Friday 13 March 2026 00:37:58 +0000 (0:00:00.096) 0:00:02.033 ********** 2026-03-13 00:38:02.130208 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:38:02.130221 | orchestrator | 2026-03-13 00:38:02.130233 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-13 00:38:02.130246 | orchestrator | Friday 13 March 2026 00:37:58 +0000 (0:00:00.164) 0:00:02.198 ********** 2026-03-13 00:38:02.130272 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:38:02.130286 | orchestrator | 2026-03-13 00:38:02.130298 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-13 00:38:02.130311 | orchestrator | Friday 13 March 2026 00:37:59 +0000 (0:00:00.652) 0:00:02.850 ********** 2026-03-13 00:38:02.130324 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:38:02.130336 | orchestrator | 2026-03-13 00:38:02.130349 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-13 00:38:02.130362 | orchestrator | 2026-03-13 00:38:02.130374 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2026-03-13 00:38:02.130387 | orchestrator | Friday 13 March 2026 00:37:59 +0000 (0:00:00.109) 0:00:02.959 ********** 2026-03-13 00:38:02.130399 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:38:02.130412 | orchestrator | 2026-03-13 00:38:02.130425 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-13 00:38:02.130436 | orchestrator | Friday 13 March 2026 00:37:59 +0000 (0:00:00.091) 0:00:03.051 ********** 2026-03-13 00:38:02.130447 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:38:02.130458 | orchestrator | 2026-03-13 00:38:02.130469 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-13 00:38:02.130480 | orchestrator | Friday 13 March 2026 00:38:00 +0000 (0:00:00.669) 0:00:03.720 ********** 2026-03-13 00:38:02.130491 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:38:02.130513 | orchestrator | 2026-03-13 00:38:02.130524 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-13 00:38:02.130535 | orchestrator | 2026-03-13 00:38:02.130547 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-13 00:38:02.130558 | orchestrator | Friday 13 March 2026 00:38:00 +0000 (0:00:00.101) 0:00:03.822 ********** 2026-03-13 00:38:02.130569 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:38:02.130580 | orchestrator | 2026-03-13 00:38:02.130591 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-13 00:38:02.130601 | orchestrator | Friday 13 March 2026 00:38:00 +0000 (0:00:00.091) 0:00:03.913 ********** 2026-03-13 00:38:02.130612 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:38:02.130623 | orchestrator | 2026-03-13 00:38:02.130634 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-03-13 00:38:02.130645 | orchestrator | Friday 13 March 2026 00:38:01 +0000 (0:00:00.673) 0:00:04.587 ********** 2026-03-13 00:38:02.130656 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:38:02.130667 | orchestrator | 2026-03-13 00:38:02.130678 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-13 00:38:02.130689 | orchestrator | 2026-03-13 00:38:02.130700 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-13 00:38:02.130711 | orchestrator | Friday 13 March 2026 00:38:01 +0000 (0:00:00.092) 0:00:04.680 ********** 2026-03-13 00:38:02.130738 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:38:02.130749 | orchestrator | 2026-03-13 00:38:02.130760 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-13 00:38:02.130771 | orchestrator | Friday 13 March 2026 00:38:01 +0000 (0:00:00.078) 0:00:04.759 ********** 2026-03-13 00:38:02.130782 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:38:02.130793 | orchestrator | 2026-03-13 00:38:02.130804 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-13 00:38:02.130815 | orchestrator | Friday 13 March 2026 00:38:01 +0000 (0:00:00.684) 0:00:05.443 ********** 2026-03-13 00:38:02.130847 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:38:02.130859 | orchestrator | 2026-03-13 00:38:02.130870 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 00:38:02.130886 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-13 00:38:02.130906 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-13 00:38:02.130924 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-03-13 00:38:02.130940 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-13 00:38:02.130957 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-13 00:38:02.130974 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-13 00:38:02.130990 | orchestrator | 2026-03-13 00:38:02.131020 | orchestrator | 2026-03-13 00:38:02.131053 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 00:38:02.131086 | orchestrator | Friday 13 March 2026 00:38:01 +0000 (0:00:00.028) 0:00:05.472 ********** 2026-03-13 00:38:02.131114 | orchestrator | =============================================================================== 2026-03-13 00:38:02.131139 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.20s 2026-03-13 00:38:02.131164 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.59s 2026-03-13 00:38:02.131207 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.53s 2026-03-13 00:38:02.329990 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-03-13 00:38:14.196797 | orchestrator | 2026-03-13 00:38:14 | INFO  | Prepare task for execution of wait-for-connection. 2026-03-13 00:38:14.260074 | orchestrator | 2026-03-13 00:38:14 | INFO  | Task 2f82c788-c933-4173-aee8-eb84a5ae43bc (wait-for-connection) was prepared for execution. 2026-03-13 00:38:14.260158 | orchestrator | 2026-03-13 00:38:14 | INFO  | It takes a moment until task 2f82c788-c933-4173-aee8-eb84a5ae43bc (wait-for-connection) has been started and output is visible here. 
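The sequence in this stretch of the log is: reboot every testbed node without waiting, then run `wait-for-connection` to block until all of them are back. A hedged sketch as a helper function follows; the `wait-for-connection` invocation and its flags are copied verbatim from the trace, while the name of the preceding reboot play (`reboot`) is an assumption inferred from the "Reboot systems" play headers, since its command line is above this excerpt.

```shell
# Hedged sketch of the reboot-then-reconnect sequence from the log above.
# The play name "reboot" is assumed; the wait-for-connection command and
# its flags are copied from the logged invocation.
reboot_and_wait() {
    local limit=$1

    # "-e ireallymeanit=yes" answers the confirmation guard shown in the
    # task "Exit playbook, if user did not mean to reboot systems".
    osism apply reboot -l "$limit" -e ireallymeanit=yes

    # The reboot tasks do not wait for the nodes to return ("do not wait
    # for the reboot to complete"), so block here until every host in the
    # limit group is reachable again.
    osism apply wait-for-connection -l "$limit" -e ireallymeanit=yes
}
```

In the log the reconnect takes about 11.5 s (`Wait until remote system is reachable ---- 11.48s`).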
2026-03-13 00:38:29.912347 | orchestrator | 2026-03-13 00:38:29.912461 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-03-13 00:38:29.912478 | orchestrator | 2026-03-13 00:38:29.912490 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-03-13 00:38:29.912502 | orchestrator | Friday 13 March 2026 00:38:18 +0000 (0:00:00.226) 0:00:00.226 ********** 2026-03-13 00:38:29.912513 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:38:29.912525 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:38:29.912537 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:38:29.912548 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:38:29.912559 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:38:29.912570 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:38:29.912581 | orchestrator | 2026-03-13 00:38:29.912592 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 00:38:29.912603 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-13 00:38:29.912616 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-13 00:38:29.912627 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-13 00:38:29.912638 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-13 00:38:29.912648 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-13 00:38:29.912659 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-13 00:38:29.912670 | orchestrator | 2026-03-13 00:38:29.912681 | orchestrator | 2026-03-13 00:38:29.912692 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-13 00:38:29.912703 | orchestrator | Friday 13 March 2026 00:38:29 +0000 (0:00:11.478) 0:00:11.705 ********** 2026-03-13 00:38:29.912714 | orchestrator | =============================================================================== 2026-03-13 00:38:29.912725 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.48s 2026-03-13 00:38:30.112492 | orchestrator | + osism apply hddtemp 2026-03-13 00:38:41.907579 | orchestrator | 2026-03-13 00:38:41 | INFO  | Prepare task for execution of hddtemp. 2026-03-13 00:38:41.981985 | orchestrator | 2026-03-13 00:38:41 | INFO  | Task c6c69fd4-206c-4de6-8fd0-78d1821c95d9 (hddtemp) was prepared for execution. 2026-03-13 00:38:41.982172 | orchestrator | 2026-03-13 00:38:41 | INFO  | It takes a moment until task c6c69fd4-206c-4de6-8fd0-78d1821c95d9 (hddtemp) has been started and output is visible here. 2026-03-13 00:39:08.720477 | orchestrator | 2026-03-13 00:39:08.720549 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-03-13 00:39:08.720556 | orchestrator | 2026-03-13 00:39:08.720561 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-03-13 00:39:08.720565 | orchestrator | Friday 13 March 2026 00:38:46 +0000 (0:00:00.214) 0:00:00.214 ********** 2026-03-13 00:39:08.720583 | orchestrator | ok: [testbed-manager] 2026-03-13 00:39:08.720589 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:39:08.720593 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:39:08.720596 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:39:08.720600 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:39:08.720604 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:39:08.720608 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:39:08.720612 | orchestrator | 2026-03-13 00:39:08.720616 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-03-13 00:39:08.720620 | orchestrator | Friday 13 March 2026 00:38:46 +0000 (0:00:00.536) 0:00:00.750 ********** 2026-03-13 00:39:08.720625 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-13 00:39:08.720631 | orchestrator | 2026-03-13 00:39:08.720635 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-03-13 00:39:08.720639 | orchestrator | Friday 13 March 2026 00:38:47 +0000 (0:00:00.851) 0:00:01.601 ********** 2026-03-13 00:39:08.720643 | orchestrator | ok: [testbed-manager] 2026-03-13 00:39:08.720647 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:39:08.720650 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:39:08.720654 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:39:08.720658 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:39:08.720662 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:39:08.720665 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:39:08.720669 | orchestrator | 2026-03-13 00:39:08.720673 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-03-13 00:39:08.720677 | orchestrator | Friday 13 March 2026 00:38:49 +0000 (0:00:01.857) 0:00:03.459 ********** 2026-03-13 00:39:08.720681 | orchestrator | changed: [testbed-manager] 2026-03-13 00:39:08.720685 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:39:08.720689 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:39:08.720693 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:39:08.720697 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:39:08.720701 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:39:08.720704 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:39:08.720708 | 
orchestrator | 2026-03-13 00:39:08.720721 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-03-13 00:39:08.720725 | orchestrator | Friday 13 March 2026 00:38:50 +0000 (0:00:00.908) 0:00:04.368 ********** 2026-03-13 00:39:08.720729 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:39:08.720732 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:39:08.720736 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:39:08.720740 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:39:08.720744 | orchestrator | ok: [testbed-manager] 2026-03-13 00:39:08.720747 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:39:08.720751 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:39:08.720755 | orchestrator | 2026-03-13 00:39:08.720759 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-03-13 00:39:08.720762 | orchestrator | Friday 13 March 2026 00:38:51 +0000 (0:00:01.075) 0:00:05.443 ********** 2026-03-13 00:39:08.720766 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:39:08.720770 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:39:08.720774 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:39:08.720777 | orchestrator | changed: [testbed-manager] 2026-03-13 00:39:08.720781 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:39:08.720785 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:39:08.720789 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:39:08.720792 | orchestrator | 2026-03-13 00:39:08.720796 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-03-13 00:39:08.720800 | orchestrator | Friday 13 March 2026 00:38:52 +0000 (0:00:00.654) 0:00:06.098 ********** 2026-03-13 00:39:08.720804 | orchestrator | changed: [testbed-manager] 2026-03-13 00:39:08.720807 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:39:08.720875 | orchestrator | changed: [testbed-node-2] 
2026-03-13 00:39:08.720882 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:39:08.720886 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:39:08.720890 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:39:08.720893 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:39:08.720897 | orchestrator | 2026-03-13 00:39:08.720901 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-03-13 00:39:08.720905 | orchestrator | Friday 13 March 2026 00:39:05 +0000 (0:00:13.400) 0:00:19.499 ********** 2026-03-13 00:39:08.720909 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-13 00:39:08.720913 | orchestrator | 2026-03-13 00:39:08.720917 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-03-13 00:39:08.720920 | orchestrator | Friday 13 March 2026 00:39:06 +0000 (0:00:01.139) 0:00:20.639 ********** 2026-03-13 00:39:08.720924 | orchestrator | changed: [testbed-manager] 2026-03-13 00:39:08.720928 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:39:08.720932 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:39:08.720935 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:39:08.720939 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:39:08.720943 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:39:08.720946 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:39:08.720950 | orchestrator | 2026-03-13 00:39:08.720954 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 00:39:08.720958 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-13 00:39:08.720972 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-13 00:39:08.720977 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-13 00:39:08.720980 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-13 00:39:08.720984 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-13 00:39:08.720988 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-13 00:39:08.720992 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-13 00:39:08.720995 | orchestrator | 2026-03-13 00:39:08.720999 | orchestrator | 2026-03-13 00:39:08.721003 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 00:39:08.721007 | orchestrator | Friday 13 March 2026 00:39:08 +0000 (0:00:01.904) 0:00:22.543 ********** 2026-03-13 00:39:08.721011 | orchestrator | =============================================================================== 2026-03-13 00:39:08.721014 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.40s 2026-03-13 00:39:08.721018 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.90s 2026-03-13 00:39:08.721022 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.86s 2026-03-13 00:39:08.721026 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.14s 2026-03-13 00:39:08.721029 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.08s 2026-03-13 00:39:08.721033 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 0.91s 2026-03-13 00:39:08.721040 | orchestrator | osism.services.hddtemp : Include 
distribution specific install tasks ---- 0.85s 2026-03-13 00:39:08.721047 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.65s 2026-03-13 00:39:08.721053 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.54s 2026-03-13 00:39:09.018175 | orchestrator | ++ semver latest 7.1.1 2026-03-13 00:39:09.067605 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-13 00:39:09.067708 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-13 00:39:09.067730 | orchestrator | + sudo systemctl restart manager.service 2026-03-13 00:39:22.721990 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-13 00:39:22.722149 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-13 00:39:22.722164 | orchestrator | + local max_attempts=60 2026-03-13 00:39:22.722175 | orchestrator | + local name=ceph-ansible 2026-03-13 00:39:22.722185 | orchestrator | + local attempt_num=1 2026-03-13 00:39:22.722195 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-13 00:39:22.757820 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-13 00:39:22.757992 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-13 00:39:22.758095 | orchestrator | + sleep 5 2026-03-13 00:39:27.765012 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-13 00:39:27.800901 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-13 00:39:27.800989 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-13 00:39:27.801004 | orchestrator | + sleep 5 2026-03-13 00:39:32.803788 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-13 00:39:32.835591 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-13 00:39:32.835706 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-13 00:39:32.835722 | orchestrator | + sleep 5 2026-03-13 00:39:37.838729 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-13 00:39:37.871249 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-13 00:39:37.871340 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-13 00:39:37.871353 | orchestrator | + sleep 5 2026-03-13 00:39:42.874407 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-13 00:39:42.911569 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-13 00:39:42.911689 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-13 00:39:42.911705 | orchestrator | + sleep 5 2026-03-13 00:39:47.915354 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-13 00:39:47.944194 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-13 00:39:47.944289 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-13 00:39:47.944304 | orchestrator | + sleep 5 2026-03-13 00:39:52.948454 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-13 00:39:52.981196 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-13 00:39:52.981327 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-13 00:39:52.981343 | orchestrator | + sleep 5 2026-03-13 00:39:57.986350 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-13 00:39:58.016432 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-13 00:39:58.016560 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-13 00:39:58.016588 | orchestrator | + sleep 5 2026-03-13 00:40:03.018810 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-13 00:40:03.055900 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-13 00:40:03.055972 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-13 00:40:03.055982 | orchestrator | + sleep 5 2026-03-13 00:40:08.059382 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-13 00:40:08.093345 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-13 00:40:08.093441 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-13 00:40:08.093454 | orchestrator | + sleep 5 2026-03-13 00:40:13.096423 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-13 00:40:13.129560 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-13 00:40:13.129658 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-13 00:40:13.129684 | orchestrator | + sleep 5 2026-03-13 00:40:18.134419 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-13 00:40:18.168594 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-13 00:40:18.168691 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-13 00:40:18.168733 | orchestrator | + sleep 5 2026-03-13 00:40:23.172786 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-13 00:40:23.210379 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-13 00:40:23.210473 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-13 00:40:23.210487 | orchestrator | + sleep 5 2026-03-13 00:40:28.215350 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-13 00:40:28.256980 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-13 00:40:28.257078 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-13 00:40:28.257093 | orchestrator | + local max_attempts=60 2026-03-13 00:40:28.257106 | orchestrator | + local name=kolla-ansible 2026-03-13 00:40:28.257117 | orchestrator | + local attempt_num=1 2026-03-13 00:40:28.257128 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-13 00:40:28.296491 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-13 00:40:28.296600 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2026-03-13 00:40:28.296617 | orchestrator | + local max_attempts=60 2026-03-13 00:40:28.296629 | orchestrator | + local name=osism-ansible 2026-03-13 00:40:28.296640 | orchestrator | + local attempt_num=1 2026-03-13 00:40:28.296950 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-13 00:40:28.334092 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-13 00:40:28.334184 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-13 00:40:28.334199 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-13 00:40:28.478885 | orchestrator | ARA in ceph-ansible already disabled. 2026-03-13 00:40:28.613583 | orchestrator | ARA in kolla-ansible already disabled. 2026-03-13 00:40:28.758154 | orchestrator | ARA in osism-ansible already disabled. 2026-03-13 00:40:28.870007 | orchestrator | ARA in osism-kubernetes already disabled. 2026-03-13 00:40:28.870273 | orchestrator | + osism apply gather-facts 2026-03-13 00:40:40.728344 | orchestrator | 2026-03-13 00:40:40 | INFO  | Prepare task for execution of gather-facts. 2026-03-13 00:40:40.791168 | orchestrator | 2026-03-13 00:40:40 | INFO  | Task 32a5042d-8ce5-435a-85e3-c48bdb946ae3 (gather-facts) was prepared for execution. 2026-03-13 00:40:40.791291 | orchestrator | 2026-03-13 00:40:40 | INFO  | It takes a moment until task 32a5042d-8ce5-435a-85e3-c48bdb946ae3 (gather-facts) has been started and output is visible here. 
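The `wait_for_container_healthy` function whose xtrace fills the preceding block can be reconstructed from that trace itself: it polls the container's Docker health status until it reports `healthy`, sleeping 5 seconds between polls and giving up after `max_attempts` tries. A minimal sketch under those assumptions (the log resolves the binary as `/usr/bin/docker`; plain `docker` is used here):

```shell
# Reconstruction of wait_for_container_healthy as implied by the xtrace:
# poll the container's health status until "healthy", giving up after
# max_attempts polls with a 5-second pause between them.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1

    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name never became healthy" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the trace, `ceph-ansible` passes through `unhealthy` (7 polls) and `starting` (6 polls) before reporting `healthy`, roughly a minute in total, while `kolla-ansible` and `osism-ansible` are already healthy on the first poll.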
2026-03-13 00:40:54.177069 | orchestrator | 2026-03-13 00:40:54.177166 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-13 00:40:54.177178 | orchestrator | 2026-03-13 00:40:54.177187 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-13 00:40:54.177196 | orchestrator | Friday 13 March 2026 00:40:44 +0000 (0:00:00.167) 0:00:00.167 ********** 2026-03-13 00:40:54.177204 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:40:54.177213 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:40:54.177221 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:40:54.177229 | orchestrator | ok: [testbed-manager] 2026-03-13 00:40:54.177237 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:40:54.177245 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:40:54.177253 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:40:54.177261 | orchestrator | 2026-03-13 00:40:54.177269 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-13 00:40:54.177277 | orchestrator | 2026-03-13 00:40:54.177285 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-13 00:40:54.177293 | orchestrator | Friday 13 March 2026 00:40:53 +0000 (0:00:08.993) 0:00:09.161 ********** 2026-03-13 00:40:54.177301 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:40:54.177310 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:40:54.177317 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:40:54.177325 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:40:54.177333 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:40:54.177341 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:40:54.177349 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:40:54.177356 | orchestrator | 2026-03-13 00:40:54.177364 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-13 00:40:54.177394 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-13 00:40:54.177404 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-13 00:40:54.177412 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-13 00:40:54.177436 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-13 00:40:54.177444 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-13 00:40:54.177452 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-13 00:40:54.177460 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-13 00:40:54.177468 | orchestrator | 2026-03-13 00:40:54.177476 | orchestrator | 2026-03-13 00:40:54.177484 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 00:40:54.177492 | orchestrator | Friday 13 March 2026 00:40:53 +0000 (0:00:00.461) 0:00:09.623 ********** 2026-03-13 00:40:54.177500 | orchestrator | =============================================================================== 2026-03-13 00:40:54.177508 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.99s 2026-03-13 00:40:54.177516 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.46s 2026-03-13 00:40:54.379395 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-13 00:40:54.388193 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-13 
00:40:54.395435 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-03-13 00:40:54.403638 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-03-13 00:40:54.410275 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-03-13 00:40:54.418095 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-03-13 00:40:54.428239 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-03-13 00:40:54.435386 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-03-13 00:40:54.445450 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-03-13 00:40:54.453869 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-03-13 00:40:54.461154 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-03-13 00:40:54.468845 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-03-13 00:40:54.476227 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-03-13 00:40:54.483829 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-03-13 00:40:54.491054 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-03-13 00:40:54.498295 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-03-13 00:40:54.505574 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-03-13 00:40:54.512897 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-03-13 00:40:54.519898 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-03-13 00:40:54.527501 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-03-13 00:40:54.535397 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-03-13 00:40:54.542766 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-03-13 00:40:54.550215 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-03-13 00:40:54.558240 | orchestrator | + [[ false == \t\r\u\e ]] 2026-03-13 00:40:54.662042 | orchestrator | ok: Runtime: 0:23:30.299870 2026-03-13 00:40:54.750774 | 2026-03-13 00:40:54.750943 | TASK [Deploy services] 2026-03-13 00:40:55.283955 | orchestrator | skipping: Conditional result was False 2026-03-13 00:40:55.304397 | 2026-03-13 00:40:55.304601 | TASK [Deploy in a nutshell] 2026-03-13 00:40:55.996817 | orchestrator | + set -e 2026-03-13 00:40:55.998398 | orchestrator | 2026-03-13 00:40:55.998430 | orchestrator | # PULL IMAGES 2026-03-13 00:40:55.998436 | orchestrator | 2026-03-13 00:40:55.998445 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-13 00:40:55.998454 | orchestrator | ++ export INTERACTIVE=false 2026-03-13 00:40:55.998473 | orchestrator | ++ INTERACTIVE=false 2026-03-13 00:40:55.998495 | 
orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-13 00:40:55.998505 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-13 00:40:55.998511 | orchestrator | + source /opt/manager-vars.sh 2026-03-13 00:40:55.998515 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-13 00:40:55.998523 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-13 00:40:55.998527 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-13 00:40:55.998534 | orchestrator | ++ CEPH_VERSION=reef 2026-03-13 00:40:55.998538 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-13 00:40:55.998545 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-13 00:40:55.998548 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-13 00:40:55.998555 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-13 00:40:55.998559 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-13 00:40:55.998564 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-13 00:40:55.998567 | orchestrator | ++ export ARA=false 2026-03-13 00:40:55.998571 | orchestrator | ++ ARA=false 2026-03-13 00:40:55.998575 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-13 00:40:55.998579 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-13 00:40:55.998583 | orchestrator | ++ export TEMPEST=true 2026-03-13 00:40:55.998586 | orchestrator | ++ TEMPEST=true 2026-03-13 00:40:55.998590 | orchestrator | ++ export IS_ZUUL=true 2026-03-13 00:40:55.998594 | orchestrator | ++ IS_ZUUL=true 2026-03-13 00:40:55.998598 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.64 2026-03-13 00:40:55.998602 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.64 2026-03-13 00:40:55.998605 | orchestrator | ++ export EXTERNAL_API=false 2026-03-13 00:40:55.998609 | orchestrator | ++ EXTERNAL_API=false 2026-03-13 00:40:55.998613 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-13 00:40:55.998617 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-13 00:40:55.998620 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-13 00:40:55.998624 | 
orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-13 00:40:55.998628 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-13 00:40:55.998632 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-13 00:40:55.998636 | orchestrator | + echo 2026-03-13 00:40:55.998639 | orchestrator | + echo '# PULL IMAGES' 2026-03-13 00:40:55.998643 | orchestrator | + echo 2026-03-13 00:40:55.998738 | orchestrator | ++ semver latest 7.0.0 2026-03-13 00:40:56.046457 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-13 00:40:56.046547 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-13 00:40:56.046556 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-03-13 00:40:57.903113 | orchestrator | 2026-03-13 00:40:57 | INFO  | Trying to run play pull-images in environment custom 2026-03-13 00:41:07.939867 | orchestrator | 2026-03-13 00:41:07 | INFO  | Prepare task for execution of pull-images. 2026-03-13 00:41:08.010482 | orchestrator | 2026-03-13 00:41:08 | INFO  | Task a8e5b721-613e-472a-9b99-e94f473e019e (pull-images) was prepared for execution. 2026-03-13 00:41:08.010597 | orchestrator | 2026-03-13 00:41:08 | INFO  | Task a8e5b721-613e-472a-9b99-e94f473e019e is running in background. No more output. Check ARA for logs. 2026-03-13 00:41:10.085103 | orchestrator | 2026-03-13 00:41:10 | INFO  | Trying to run play wipe-partitions in environment custom 2026-03-13 00:41:20.159781 | orchestrator | 2026-03-13 00:41:20 | INFO  | Prepare task for execution of wipe-partitions. 2026-03-13 00:41:20.238624 | orchestrator | 2026-03-13 00:41:20 | INFO  | Task ccaa42cb-d770-47c9-bb6f-7e78beca1e98 (wipe-partitions) was prepared for execution. 2026-03-13 00:41:20.238727 | orchestrator | 2026-03-13 00:41:20 | INFO  | It takes a moment until task ccaa42cb-d770-47c9-bb6f-7e78beca1e98 (wipe-partitions) has been started and output is visible here. 
2026-03-13 00:41:33.102709 | orchestrator | 2026-03-13 00:41:33.102839 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-03-13 00:41:33.102865 | orchestrator | 2026-03-13 00:41:33.102878 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-03-13 00:41:33.102894 | orchestrator | Friday 13 March 2026 00:41:24 +0000 (0:00:00.095) 0:00:00.095 ********** 2026-03-13 00:41:33.102929 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:41:33.102942 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:41:33.102953 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:41:33.102964 | orchestrator | 2026-03-13 00:41:33.103045 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-03-13 00:41:33.103064 | orchestrator | Friday 13 March 2026 00:41:25 +0000 (0:00:00.522) 0:00:00.617 ********** 2026-03-13 00:41:33.103090 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:41:33.103111 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:41:33.103131 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:41:33.103151 | orchestrator | 2026-03-13 00:41:33.103167 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-03-13 00:41:33.103178 | orchestrator | Friday 13 March 2026 00:41:25 +0000 (0:00:00.296) 0:00:00.914 ********** 2026-03-13 00:41:33.103189 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:41:33.103201 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:41:33.103211 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:41:33.103222 | orchestrator | 2026-03-13 00:41:33.103235 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-03-13 00:41:33.103247 | orchestrator | Friday 13 March 2026 00:41:26 +0000 (0:00:00.581) 0:00:01.496 ********** 2026-03-13 00:41:33.103259 | orchestrator | skipping: 
[testbed-node-3] 2026-03-13 00:41:33.103271 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:41:33.103284 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:41:33.103296 | orchestrator | 2026-03-13 00:41:33.103309 | orchestrator | TASK [Check device availability] *********************************************** 2026-03-13 00:41:33.103321 | orchestrator | Friday 13 March 2026 00:41:26 +0000 (0:00:00.235) 0:00:01.731 ********** 2026-03-13 00:41:33.103334 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-13 00:41:33.103352 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-13 00:41:33.103364 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-13 00:41:33.103377 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-13 00:41:33.103389 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-13 00:41:33.103401 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-13 00:41:33.103413 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-13 00:41:33.103426 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-13 00:41:33.103438 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-13 00:41:33.103451 | orchestrator | 2026-03-13 00:41:33.103464 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-03-13 00:41:33.103477 | orchestrator | Friday 13 March 2026 00:41:27 +0000 (0:00:01.193) 0:00:02.925 ********** 2026-03-13 00:41:33.103489 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-03-13 00:41:33.103502 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-03-13 00:41:33.103514 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-03-13 00:41:33.103526 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-03-13 00:41:33.103539 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-03-13 00:41:33.103551 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-03-13 00:41:33.103563 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-03-13 00:41:33.103576 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-03-13 00:41:33.103589 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-03-13 00:41:33.103601 | orchestrator | 2026-03-13 00:41:33.103614 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-03-13 00:41:33.103626 | orchestrator | Friday 13 March 2026 00:41:29 +0000 (0:00:01.458) 0:00:04.384 ********** 2026-03-13 00:41:33.103636 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-13 00:41:33.103647 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-13 00:41:33.103657 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-13 00:41:33.103675 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-13 00:41:33.103695 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-13 00:41:33.103706 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-13 00:41:33.103717 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-13 00:41:33.103727 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-13 00:41:33.103751 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-13 00:41:33.103762 | orchestrator | 2026-03-13 00:41:33.103772 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-03-13 00:41:33.103783 | orchestrator | Friday 13 March 2026 00:41:31 +0000 (0:00:02.328) 0:00:06.713 ********** 2026-03-13 00:41:33.103794 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:41:33.103805 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:41:33.103815 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:41:33.103826 | orchestrator | 2026-03-13 00:41:33.103836 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-03-13 00:41:33.103847 | orchestrator | Friday 13 March 2026 00:41:32 +0000 (0:00:00.583) 0:00:07.297 ********** 2026-03-13 00:41:33.103858 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:41:33.103869 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:41:33.103879 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:41:33.103891 | orchestrator | 2026-03-13 00:41:33.103902 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 00:41:33.103914 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-13 00:41:33.103926 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-13 00:41:33.103955 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-13 00:41:33.104012 | orchestrator | 2026-03-13 00:41:33.104026 | orchestrator | 2026-03-13 00:41:33.104038 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 00:41:33.104049 | orchestrator | Friday 13 March 2026 00:41:32 +0000 (0:00:00.598) 0:00:07.895 ********** 2026-03-13 00:41:33.104060 | orchestrator | =============================================================================== 2026-03-13 00:41:33.104074 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.33s 2026-03-13 00:41:33.104093 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.46s 2026-03-13 00:41:33.104112 | orchestrator | Check device availability ----------------------------------------------- 1.19s 2026-03-13 00:41:33.104132 | orchestrator | Request device events from the kernel ----------------------------------- 0.60s 2026-03-13 00:41:33.104152 | orchestrator | Reload udev rules 
------------------------------------------------------- 0.58s 2026-03-13 00:41:33.104172 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.58s 2026-03-13 00:41:33.104191 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.52s 2026-03-13 00:41:33.104209 | orchestrator | Remove all rook related logical devices --------------------------------- 0.30s 2026-03-13 00:41:33.104226 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.24s 2026-03-13 00:41:45.529768 | orchestrator | 2026-03-13 00:41:45 | INFO  | Prepare task for execution of facts. 2026-03-13 00:41:45.593529 | orchestrator | 2026-03-13 00:41:45 | INFO  | Task 05db343f-f6f6-44d8-985d-1c7b80256d6e (facts) was prepared for execution. 2026-03-13 00:41:45.593621 | orchestrator | 2026-03-13 00:41:45 | INFO  | It takes a moment until task 05db343f-f6f6-44d8-985d-1c7b80256d6e (facts) has been started and output is visible here. 
2026-03-13 00:41:58.225285 | orchestrator | 2026-03-13 00:41:58.225402 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-13 00:41:58.225420 | orchestrator | 2026-03-13 00:41:58.225456 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-13 00:41:58.225468 | orchestrator | Friday 13 March 2026 00:41:49 +0000 (0:00:00.263) 0:00:00.263 ********** 2026-03-13 00:41:58.225479 | orchestrator | ok: [testbed-manager] 2026-03-13 00:41:58.225491 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:41:58.225502 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:41:58.225512 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:41:58.225523 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:41:58.225533 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:41:58.225544 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:41:58.225555 | orchestrator | 2026-03-13 00:41:58.225583 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-13 00:41:58.225595 | orchestrator | Friday 13 March 2026 00:41:50 +0000 (0:00:01.077) 0:00:01.340 ********** 2026-03-13 00:41:58.225606 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:41:58.225617 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:41:58.225628 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:41:58.225638 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:41:58.225649 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:41:58.225660 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:41:58.225671 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:41:58.225681 | orchestrator | 2026-03-13 00:41:58.225692 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-13 00:41:58.225703 | orchestrator | 2026-03-13 00:41:58.225714 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-13 00:41:58.225725 | orchestrator | Friday 13 March 2026 00:41:51 +0000 (0:00:01.197) 0:00:02.537 ********** 2026-03-13 00:41:58.225736 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:41:58.225747 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:41:58.225758 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:41:58.225768 | orchestrator | ok: [testbed-manager] 2026-03-13 00:41:58.225779 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:41:58.225790 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:41:58.225800 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:41:58.225811 | orchestrator | 2026-03-13 00:41:58.225822 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-13 00:41:58.225834 | orchestrator | 2026-03-13 00:41:58.225846 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-13 00:41:58.225859 | orchestrator | Friday 13 March 2026 00:41:57 +0000 (0:00:05.698) 0:00:08.235 ********** 2026-03-13 00:41:58.225872 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:41:58.225884 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:41:58.225896 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:41:58.225908 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:41:58.225920 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:41:58.225933 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:41:58.225945 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:41:58.225956 | orchestrator | 2026-03-13 00:41:58.225967 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 00:41:58.225978 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-13 00:41:58.225990 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-13 00:41:58.226090 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-13 00:41:58.226101 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-13 00:41:58.226112 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-13 00:41:58.226133 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-13 00:41:58.226144 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-13 00:41:58.226155 | orchestrator | 2026-03-13 00:41:58.226166 | orchestrator | 2026-03-13 00:41:58.226176 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 00:41:58.226187 | orchestrator | Friday 13 March 2026 00:41:58 +0000 (0:00:00.447) 0:00:08.683 ********** 2026-03-13 00:41:58.226198 | orchestrator | =============================================================================== 2026-03-13 00:41:58.226209 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.70s 2026-03-13 00:41:58.226220 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.20s 2026-03-13 00:41:58.226230 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.08s 2026-03-13 00:41:58.226241 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.45s 2026-03-13 00:42:00.305769 | orchestrator | 2026-03-13 00:42:00 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes. 2026-03-13 00:42:00.368352 | orchestrator | 2026-03-13 00:42:00 | INFO  | Task 922ff925-0bac-48a2-bc0f-aeaa28fe89dd (ceph-configure-lvm-volumes) was prepared for execution. 
2026-03-13 00:42:00.368448 | orchestrator | 2026-03-13 00:42:00 | INFO  | It takes a moment until task 922ff925-0bac-48a2-bc0f-aeaa28fe89dd (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-03-13 00:42:11.295780 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-13 00:42:11.295888 | orchestrator | 2.16.14 2026-03-13 00:42:11.295902 | orchestrator | 2026-03-13 00:42:11.295922 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-13 00:42:11.295933 | orchestrator | 2026-03-13 00:42:11.295943 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-13 00:42:11.295952 | orchestrator | Friday 13 March 2026 00:42:04 +0000 (0:00:00.309) 0:00:00.309 ********** 2026-03-13 00:42:11.295961 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-13 00:42:11.295970 | orchestrator | 2026-03-13 00:42:11.295979 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-13 00:42:11.295988 | orchestrator | Friday 13 March 2026 00:42:05 +0000 (0:00:00.239) 0:00:00.549 ********** 2026-03-13 00:42:11.295997 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:42:11.296030 | orchestrator | 2026-03-13 00:42:11.296040 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:42:11.296048 | orchestrator | Friday 13 March 2026 00:42:05 +0000 (0:00:00.214) 0:00:00.763 ********** 2026-03-13 00:42:11.296057 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-03-13 00:42:11.296066 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-03-13 00:42:11.296075 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-03-13 00:42:11.296083 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-03-13 00:42:11.296092 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-03-13 00:42:11.296101 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-03-13 00:42:11.296109 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-03-13 00:42:11.296118 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-03-13 00:42:11.296127 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-03-13 00:42:11.296135 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-03-13 00:42:11.296164 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-03-13 00:42:11.296173 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-03-13 00:42:11.296182 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-03-13 00:42:11.296190 | orchestrator | 2026-03-13 00:42:11.296199 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:42:11.296208 | orchestrator | Friday 13 March 2026 00:42:05 +0000 (0:00:00.462) 0:00:01.225 ********** 2026-03-13 00:42:11.296216 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:42:11.296225 | orchestrator | 2026-03-13 00:42:11.296233 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:42:11.296242 | orchestrator | Friday 13 March 2026 00:42:05 +0000 (0:00:00.210) 0:00:01.435 ********** 2026-03-13 00:42:11.296250 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:42:11.296259 | orchestrator | 2026-03-13 00:42:11.296268 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:42:11.296280 | orchestrator | Friday 13 March 2026 00:42:06 +0000 (0:00:00.196) 0:00:01.631 ********** 2026-03-13 00:42:11.296289 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:42:11.296298 | orchestrator | 2026-03-13 00:42:11.296309 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:42:11.296319 | orchestrator | Friday 13 March 2026 00:42:06 +0000 (0:00:00.201) 0:00:01.833 ********** 2026-03-13 00:42:11.296330 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:42:11.296339 | orchestrator | 2026-03-13 00:42:11.296349 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:42:11.296359 | orchestrator | Friday 13 March 2026 00:42:06 +0000 (0:00:00.201) 0:00:02.035 ********** 2026-03-13 00:42:11.296369 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:42:11.296379 | orchestrator | 2026-03-13 00:42:11.296389 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:42:11.296399 | orchestrator | Friday 13 March 2026 00:42:06 +0000 (0:00:00.185) 0:00:02.220 ********** 2026-03-13 00:42:11.296410 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:42:11.296420 | orchestrator | 2026-03-13 00:42:11.296429 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:42:11.296439 | orchestrator | Friday 13 March 2026 00:42:06 +0000 (0:00:00.177) 0:00:02.397 ********** 2026-03-13 00:42:11.296449 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:42:11.296459 | orchestrator | 2026-03-13 00:42:11.296469 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:42:11.296479 | orchestrator | Friday 13 March 2026 00:42:07 +0000 (0:00:00.186) 0:00:02.584 ********** 
2026-03-13 00:42:11.296489 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:42:11.296499 | orchestrator | 2026-03-13 00:42:11.296509 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:42:11.296519 | orchestrator | Friday 13 March 2026 00:42:07 +0000 (0:00:00.178) 0:00:02.762 ********** 2026-03-13 00:42:11.296529 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d) 2026-03-13 00:42:11.296540 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d) 2026-03-13 00:42:11.296550 | orchestrator | 2026-03-13 00:42:11.296560 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:42:11.296584 | orchestrator | Friday 13 March 2026 00:42:07 +0000 (0:00:00.367) 0:00:03.130 ********** 2026-03-13 00:42:11.296595 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0e74aa03-0dd7-4a4d-94fb-6534b2ad29b5) 2026-03-13 00:42:11.296605 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0e74aa03-0dd7-4a4d-94fb-6534b2ad29b5) 2026-03-13 00:42:11.296614 | orchestrator | 2026-03-13 00:42:11.296624 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:42:11.296641 | orchestrator | Friday 13 March 2026 00:42:08 +0000 (0:00:00.516) 0:00:03.646 ********** 2026-03-13 00:42:11.296652 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e5b6d572-8591-43aa-97f9-3b718c2d248a) 2026-03-13 00:42:11.296661 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e5b6d572-8591-43aa-97f9-3b718c2d248a) 2026-03-13 00:42:11.296669 | orchestrator | 2026-03-13 00:42:11.296678 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:42:11.296686 | orchestrator | Friday 13 March 2026 00:42:08 +0000 
(0:00:00.504) 0:00:04.150 ********** 2026-03-13 00:42:11.296695 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b47ce045-806c-4f33-b887-31de2316680c) 2026-03-13 00:42:11.296703 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b47ce045-806c-4f33-b887-31de2316680c) 2026-03-13 00:42:11.296712 | orchestrator | 2026-03-13 00:42:11.296720 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:42:11.296729 | orchestrator | Friday 13 March 2026 00:42:09 +0000 (0:00:00.636) 0:00:04.787 ********** 2026-03-13 00:42:11.296737 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-13 00:42:11.296745 | orchestrator | 2026-03-13 00:42:11.296754 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:42:11.296762 | orchestrator | Friday 13 March 2026 00:42:09 +0000 (0:00:00.298) 0:00:05.086 ********** 2026-03-13 00:42:11.296776 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-13 00:42:11.296785 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-03-13 00:42:11.296793 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-13 00:42:11.296802 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-13 00:42:11.296810 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-13 00:42:11.296819 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-13 00:42:11.296827 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-13 00:42:11.296835 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for 
testbed-node-3 => (item=loop7) 2026-03-13 00:42:11.296844 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-13 00:42:11.296852 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-13 00:42:11.296861 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-13 00:42:11.296869 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-13 00:42:11.296878 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-13 00:42:11.296886 | orchestrator | 2026-03-13 00:42:11.296895 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:42:11.296903 | orchestrator | Friday 13 March 2026 00:42:09 +0000 (0:00:00.382) 0:00:05.468 ********** 2026-03-13 00:42:11.296912 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:42:11.296920 | orchestrator | 2026-03-13 00:42:11.296929 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:42:11.296937 | orchestrator | Friday 13 March 2026 00:42:10 +0000 (0:00:00.181) 0:00:05.650 ********** 2026-03-13 00:42:11.296946 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:42:11.296954 | orchestrator | 2026-03-13 00:42:11.296963 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:42:11.296971 | orchestrator | Friday 13 March 2026 00:42:10 +0000 (0:00:00.209) 0:00:05.860 ********** 2026-03-13 00:42:11.296980 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:42:11.296994 | orchestrator | 2026-03-13 00:42:11.297002 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:42:11.297085 | orchestrator | Friday 13 March 2026 00:42:10 +0000 
(0:00:00.203) 0:00:06.063 ********** 2026-03-13 00:42:11.297094 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:42:11.297103 | orchestrator | 2026-03-13 00:42:11.297112 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:42:11.297120 | orchestrator | Friday 13 March 2026 00:42:10 +0000 (0:00:00.183) 0:00:06.246 ********** 2026-03-13 00:42:11.297129 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:42:11.297137 | orchestrator | 2026-03-13 00:42:11.297151 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:42:11.297159 | orchestrator | Friday 13 March 2026 00:42:10 +0000 (0:00:00.171) 0:00:06.418 ********** 2026-03-13 00:42:11.297168 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:42:11.297176 | orchestrator | 2026-03-13 00:42:11.297185 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:42:11.297194 | orchestrator | Friday 13 March 2026 00:42:11 +0000 (0:00:00.183) 0:00:06.602 ********** 2026-03-13 00:42:11.297202 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:42:11.297211 | orchestrator | 2026-03-13 00:42:11.297226 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:42:17.689352 | orchestrator | Friday 13 March 2026 00:42:11 +0000 (0:00:00.171) 0:00:06.773 ********** 2026-03-13 00:42:17.689462 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:42:17.689479 | orchestrator | 2026-03-13 00:42:17.689492 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:42:17.689503 | orchestrator | Friday 13 March 2026 00:42:11 +0000 (0:00:00.176) 0:00:06.949 ********** 2026-03-13 00:42:17.689514 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-13 00:42:17.689526 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-03-13 
00:42:17.689537 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-13 00:42:17.689548 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-13 00:42:17.689559 | orchestrator |
2026-03-13 00:42:17.689570 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:42:17.689580 | orchestrator | Friday 13 March 2026  00:42:12 +0000 (0:00:00.794) 0:00:07.744 **********
2026-03-13 00:42:17.689591 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:42:17.689602 | orchestrator |
2026-03-13 00:42:17.689613 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:42:17.689624 | orchestrator | Friday 13 March 2026  00:42:12 +0000 (0:00:00.195) 0:00:07.940 **********
2026-03-13 00:42:17.689634 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:42:17.689645 | orchestrator |
2026-03-13 00:42:17.689656 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:42:17.689667 | orchestrator | Friday 13 March 2026  00:42:12 +0000 (0:00:00.182) 0:00:08.123 **********
2026-03-13 00:42:17.689677 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:42:17.689688 | orchestrator |
2026-03-13 00:42:17.689699 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:42:17.689709 | orchestrator | Friday 13 March 2026  00:42:12 +0000 (0:00:00.191) 0:00:08.314 **********
2026-03-13 00:42:17.689720 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:42:17.689731 | orchestrator |
2026-03-13 00:42:17.689742 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-13 00:42:17.689753 | orchestrator | Friday 13 March 2026  00:42:13 +0000 (0:00:00.176) 0:00:08.491 **********
2026-03-13 00:42:17.689764 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-03-13 00:42:17.689775 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-03-13 00:42:17.689785 | orchestrator |
2026-03-13 00:42:17.689796 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-13 00:42:17.689807 | orchestrator | Friday 13 March 2026  00:42:13 +0000 (0:00:00.155) 0:00:08.646 **********
2026-03-13 00:42:17.689841 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:42:17.689852 | orchestrator |
2026-03-13 00:42:17.689863 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-13 00:42:17.689874 | orchestrator | Friday 13 March 2026  00:42:13 +0000 (0:00:00.128) 0:00:08.774 **********
2026-03-13 00:42:17.689885 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:42:17.689895 | orchestrator |
2026-03-13 00:42:17.689908 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-13 00:42:17.689919 | orchestrator | Friday 13 March 2026  00:42:13 +0000 (0:00:00.122) 0:00:08.897 **********
2026-03-13 00:42:17.689929 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:42:17.689940 | orchestrator |
2026-03-13 00:42:17.689950 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-13 00:42:17.689961 | orchestrator | Friday 13 March 2026  00:42:13 +0000 (0:00:00.115) 0:00:09.013 **********
2026-03-13 00:42:17.689972 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:42:17.689982 | orchestrator |
2026-03-13 00:42:17.689993 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-13 00:42:17.690004 | orchestrator | Friday 13 March 2026  00:42:13 +0000 (0:00:00.121) 0:00:09.134 **********
2026-03-13 00:42:17.690089 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b5494c86-4b11-53e5-88ab-5da9d8a68a1e'}})
2026-03-13 00:42:17.690102 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b7299377-1bbd-5436-9d58-2dd820a08a2f'}})
2026-03-13 00:42:17.690112 | orchestrator |
2026-03-13 00:42:17.690123 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-13 00:42:17.690134 | orchestrator | Friday 13 March 2026  00:42:13 +0000 (0:00:00.150) 0:00:09.285 **********
2026-03-13 00:42:17.690145 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b5494c86-4b11-53e5-88ab-5da9d8a68a1e'}})
2026-03-13 00:42:17.690170 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b7299377-1bbd-5436-9d58-2dd820a08a2f'}})
2026-03-13 00:42:17.690181 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:42:17.690192 | orchestrator |
2026-03-13 00:42:17.690202 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-13 00:42:17.690213 | orchestrator | Friday 13 March 2026  00:42:13 +0000 (0:00:00.132) 0:00:09.418 **********
2026-03-13 00:42:17.690223 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b5494c86-4b11-53e5-88ab-5da9d8a68a1e'}})
2026-03-13 00:42:17.690234 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b7299377-1bbd-5436-9d58-2dd820a08a2f'}})
2026-03-13 00:42:17.690245 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:42:17.690256 | orchestrator |
2026-03-13 00:42:17.690266 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-13 00:42:17.690278 | orchestrator | Friday 13 March 2026  00:42:14 +0000 (0:00:00.270) 0:00:09.688 **********
2026-03-13 00:42:17.690288 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b5494c86-4b11-53e5-88ab-5da9d8a68a1e'}})
2026-03-13 00:42:17.690317 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b7299377-1bbd-5436-9d58-2dd820a08a2f'}})
2026-03-13 00:42:17.690330 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:42:17.690340 | orchestrator |
2026-03-13 00:42:17.690351 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-13 00:42:17.690362 | orchestrator | Friday 13 March 2026  00:42:14 +0000 (0:00:00.135) 0:00:09.823 **********
2026-03-13 00:42:17.690373 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:42:17.690398 | orchestrator |
2026-03-13 00:42:17.690421 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-13 00:42:17.690432 | orchestrator | Friday 13 March 2026  00:42:14 +0000 (0:00:00.129) 0:00:09.953 **********
2026-03-13 00:42:17.690443 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:42:17.690463 | orchestrator |
2026-03-13 00:42:17.690475 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-13 00:42:17.690485 | orchestrator | Friday 13 March 2026  00:42:14 +0000 (0:00:00.128) 0:00:10.082 **********
2026-03-13 00:42:17.690496 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:42:17.690508 | orchestrator |
2026-03-13 00:42:17.690528 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-13 00:42:17.690539 | orchestrator | Friday 13 March 2026  00:42:14 +0000 (0:00:00.115) 0:00:10.198 **********
2026-03-13 00:42:17.690550 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:42:17.690561 | orchestrator |
2026-03-13 00:42:17.690572 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-13 00:42:17.690583 | orchestrator | Friday 13 March 2026  00:42:14 +0000 (0:00:00.116) 0:00:10.314 **********
2026-03-13 00:42:17.690593 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:42:17.690604 | orchestrator |
2026-03-13 00:42:17.690615 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-13 00:42:17.690626 | orchestrator | Friday 13 March 2026  00:42:14 +0000 (0:00:00.107) 0:00:10.422 **********
2026-03-13 00:42:17.690637 | orchestrator | ok: [testbed-node-3] => {
2026-03-13 00:42:17.690648 | orchestrator |     "ceph_osd_devices": {
2026-03-13 00:42:17.690659 | orchestrator |         "sdb": {
2026-03-13 00:42:17.690670 | orchestrator |             "osd_lvm_uuid": "b5494c86-4b11-53e5-88ab-5da9d8a68a1e"
2026-03-13 00:42:17.690680 | orchestrator |         },
2026-03-13 00:42:17.690691 | orchestrator |         "sdc": {
2026-03-13 00:42:17.690702 | orchestrator |             "osd_lvm_uuid": "b7299377-1bbd-5436-9d58-2dd820a08a2f"
2026-03-13 00:42:17.690713 | orchestrator |         }
2026-03-13 00:42:17.690724 | orchestrator |     }
2026-03-13 00:42:17.690734 | orchestrator | }
2026-03-13 00:42:17.690746 | orchestrator |
2026-03-13 00:42:17.690756 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-13 00:42:17.690767 | orchestrator | Friday 13 March 2026  00:42:15 +0000 (0:00:00.132) 0:00:10.555 **********
2026-03-13 00:42:17.690778 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:42:17.690789 | orchestrator |
2026-03-13 00:42:17.690800 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-13 00:42:17.690811 | orchestrator | Friday 13 March 2026  00:42:15 +0000 (0:00:00.118) 0:00:10.673 **********
2026-03-13 00:42:17.690822 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:42:17.690832 | orchestrator |
2026-03-13 00:42:17.690843 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-13 00:42:17.690854 | orchestrator | Friday 13 March 2026  00:42:15 +0000 (0:00:00.121) 0:00:10.794 **********
2026-03-13 00:42:17.690865 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:42:17.690875 | orchestrator |
2026-03-13 00:42:17.690886 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-13 00:42:17.690897 | orchestrator | Friday 13 March 2026  00:42:15 +0000 (0:00:00.117) 0:00:10.912 **********
2026-03-13 00:42:17.690908 | orchestrator | changed: [testbed-node-3] => {
2026-03-13 00:42:17.690918 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-13 00:42:17.690929 | orchestrator |         "ceph_osd_devices": {
2026-03-13 00:42:17.690940 | orchestrator |             "sdb": {
2026-03-13 00:42:17.690951 | orchestrator |                 "osd_lvm_uuid": "b5494c86-4b11-53e5-88ab-5da9d8a68a1e"
2026-03-13 00:42:17.690962 | orchestrator |             },
2026-03-13 00:42:17.690972 | orchestrator |             "sdc": {
2026-03-13 00:42:17.690983 | orchestrator |                 "osd_lvm_uuid": "b7299377-1bbd-5436-9d58-2dd820a08a2f"
2026-03-13 00:42:17.690994 | orchestrator |             }
2026-03-13 00:42:17.691004 | orchestrator |         },
2026-03-13 00:42:17.691038 | orchestrator |         "lvm_volumes": [
2026-03-13 00:42:17.691049 | orchestrator |             {
2026-03-13 00:42:17.691059 | orchestrator |                 "data": "osd-block-b5494c86-4b11-53e5-88ab-5da9d8a68a1e",
2026-03-13 00:42:17.691070 | orchestrator |                 "data_vg": "ceph-b5494c86-4b11-53e5-88ab-5da9d8a68a1e"
2026-03-13 00:42:17.691088 | orchestrator |             },
2026-03-13 00:42:17.691098 | orchestrator |             {
2026-03-13 00:42:17.691109 | orchestrator |                 "data": "osd-block-b7299377-1bbd-5436-9d58-2dd820a08a2f",
2026-03-13 00:42:17.691120 | orchestrator |                 "data_vg": "ceph-b7299377-1bbd-5436-9d58-2dd820a08a2f"
2026-03-13 00:42:17.691130 | orchestrator |             }
2026-03-13 00:42:17.691141 | orchestrator |         ]
2026-03-13 00:42:17.691152 | orchestrator |     }
2026-03-13 00:42:17.691163 | orchestrator | }
2026-03-13 00:42:17.691173 | orchestrator |
2026-03-13 00:42:17.691184 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-13 00:42:17.691195 | orchestrator | Friday 13 March 2026  00:42:15 +0000 (0:00:00.279) 0:00:11.191 **********
2026-03-13 00:42:17.691206 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-13 00:42:17.691216 | orchestrator |
2026-03-13 00:42:17.691227 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-13 00:42:17.691238 | orchestrator |
2026-03-13 00:42:17.691248 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-13 00:42:17.691259 | orchestrator | Friday 13 March 2026  00:42:17 +0000 (0:00:01.545) 0:00:12.737 **********
2026-03-13 00:42:17.691269 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-13 00:42:17.691280 | orchestrator |
2026-03-13 00:42:17.691297 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-13 00:42:17.691308 | orchestrator | Friday 13 March 2026  00:42:17 +0000 (0:00:00.217) 0:00:12.955 **********
2026-03-13 00:42:17.691319 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:42:17.691330 | orchestrator |
2026-03-13 00:42:17.691347 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:42:24.542782 | orchestrator | Friday 13 March 2026  00:42:17 +0000 (0:00:00.217) 0:00:13.172 **********
2026-03-13 00:42:24.542889 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-03-13 00:42:24.542904 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-03-13 00:42:24.542915 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-03-13 00:42:24.542925 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-03-13 00:42:24.542934 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-03-13 00:42:24.542945 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-03-13 00:42:24.542954 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-03-13 00:42:24.542969 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-03-13 00:42:24.542979 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-03-13 00:42:24.542990 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-03-13 00:42:24.542999 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-03-13 00:42:24.543009 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-03-13 00:42:24.543087 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-03-13 00:42:24.543099 | orchestrator |
2026-03-13 00:42:24.543110 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:42:24.543120 | orchestrator | Friday 13 March 2026  00:42:18 +0000 (0:00:00.357) 0:00:13.530 **********
2026-03-13 00:42:24.543130 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:42:24.543140 | orchestrator |
2026-03-13 00:42:24.543150 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:42:24.543160 | orchestrator | Friday 13 March 2026  00:42:18 +0000 (0:00:00.170) 0:00:13.700 **********
2026-03-13 00:42:24.543195 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:42:24.543205 | orchestrator |
2026-03-13 00:42:24.543215 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:42:24.543224 | orchestrator | Friday 13 March 2026  00:42:18 +0000 (0:00:00.173) 0:00:13.873 **********
2026-03-13 00:42:24.543234 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:42:24.543243 | orchestrator |
2026-03-13 00:42:24.543252 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:42:24.543262 | orchestrator | Friday 13 March 2026  00:42:18 +0000 (0:00:00.181) 0:00:14.055 **********
2026-03-13 00:42:24.543271 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:42:24.543280 | orchestrator |
2026-03-13 00:42:24.543290 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:42:24.543299 | orchestrator | Friday 13 March 2026  00:42:18 +0000 (0:00:00.170) 0:00:14.225 **********
2026-03-13 00:42:24.543309 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:42:24.543318 | orchestrator |
2026-03-13 00:42:24.543329 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:42:24.543340 | orchestrator | Friday 13 March 2026  00:42:19 +0000 (0:00:00.430) 0:00:14.656 **********
2026-03-13 00:42:24.543351 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:42:24.543362 | orchestrator |
2026-03-13 00:42:24.543373 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:42:24.543384 | orchestrator | Friday 13 March 2026  00:42:19 +0000 (0:00:00.192) 0:00:14.848 **********
2026-03-13 00:42:24.543395 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:42:24.543406 | orchestrator |
2026-03-13 00:42:24.543416 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:42:24.543427 | orchestrator | Friday 13 March 2026  00:42:19 +0000 (0:00:00.200) 0:00:15.048 **********
2026-03-13 00:42:24.543438 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:42:24.543449 | orchestrator |
2026-03-13 00:42:24.543459 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:42:24.543470 | orchestrator | Friday 13 March 2026  00:42:19 +0000 (0:00:00.195) 0:00:15.244 **********
2026-03-13 00:42:24.543481 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233)
2026-03-13 00:42:24.543493 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233)
2026-03-13 00:42:24.543504 | orchestrator |
2026-03-13 00:42:24.543533 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:42:24.543544 | orchestrator | Friday 13 March 2026  00:42:20 +0000 (0:00:00.374) 0:00:15.619 **********
2026-03-13 00:42:24.543555 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9a254b57-f2ae-4287-95a0-937fffba734e)
2026-03-13 00:42:24.543566 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9a254b57-f2ae-4287-95a0-937fffba734e)
2026-03-13 00:42:24.543577 | orchestrator |
2026-03-13 00:42:24.543589 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:42:24.543600 | orchestrator | Friday 13 March 2026  00:42:20 +0000 (0:00:00.413) 0:00:16.032 **********
2026-03-13 00:42:24.543612 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5123065a-17ef-4227-8b29-db8d7701c704)
2026-03-13 00:42:24.543623 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5123065a-17ef-4227-8b29-db8d7701c704)
2026-03-13 00:42:24.543634 | orchestrator |
2026-03-13 00:42:24.543645 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:42:24.543670 | orchestrator | Friday 13 March 2026  00:42:20 +0000 (0:00:00.386) 0:00:16.419 **********
2026-03-13 00:42:24.543681 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e49f76b8-3d49-472e-b9d5-6b475ff66b1a)
2026-03-13 00:42:24.543693 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e49f76b8-3d49-472e-b9d5-6b475ff66b1a)
2026-03-13 00:42:24.543709 | orchestrator |
2026-03-13 00:42:24.543737 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:42:24.543754 | orchestrator | Friday 13 March 2026  00:42:21 +0000 (0:00:00.368) 0:00:16.787 **********
2026-03-13 00:42:24.543769 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-13 00:42:24.543785 | orchestrator |
2026-03-13 00:42:24.543883 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:42:24.543896 | orchestrator | Friday 13 March 2026  00:42:21 +0000 (0:00:00.269) 0:00:17.057 **********
2026-03-13 00:42:24.543905 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-03-13 00:42:24.543915 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-03-13 00:42:24.543924 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-03-13 00:42:24.543933 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-03-13 00:42:24.543943 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-03-13 00:42:24.543952 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-03-13 00:42:24.543961 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-03-13 00:42:24.543971 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-03-13 00:42:24.543980 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-03-13 00:42:24.543989 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-03-13 00:42:24.543999 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-03-13 00:42:24.544008 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-03-13 00:42:24.544045 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-03-13 00:42:24.544061 | orchestrator |
2026-03-13 00:42:24.544071 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:42:24.544080 | orchestrator | Friday 13 March 2026  00:42:21 +0000 (0:00:00.330) 0:00:17.387 **********
2026-03-13 00:42:24.544089 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:42:24.544099 | orchestrator |
2026-03-13 00:42:24.544108 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:42:24.544118 | orchestrator | Friday 13 March 2026  00:42:22 +0000 (0:00:00.479) 0:00:17.867 **********
2026-03-13 00:42:24.544127 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:42:24.544137 | orchestrator |
2026-03-13 00:42:24.544146 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:42:24.544156 | orchestrator | Friday 13 March 2026  00:42:22 +0000 (0:00:00.159) 0:00:18.026 **********
2026-03-13 00:42:24.544165 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:42:24.544174 | orchestrator |
2026-03-13 00:42:24.544184 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:42:24.544193 | orchestrator | Friday 13 March 2026  00:42:22 +0000 (0:00:00.143) 0:00:18.169 **********
2026-03-13 00:42:24.544202 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:42:24.544212 | orchestrator |
2026-03-13 00:42:24.544221 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:42:24.544230 | orchestrator | Friday 13 March 2026  00:42:22 +0000 (0:00:00.148) 0:00:18.317 **********
2026-03-13 00:42:24.544240 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:42:24.544249 | orchestrator |
2026-03-13 00:42:24.544258 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:42:24.544268 | orchestrator | Friday 13 March 2026  00:42:22 +0000 (0:00:00.155) 0:00:18.472 **********
2026-03-13 00:42:24.544277 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:42:24.544296 | orchestrator |
2026-03-13 00:42:24.544313 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:42:24.544323 | orchestrator | Friday 13 March 2026  00:42:23 +0000 (0:00:00.188) 0:00:18.661 **********
2026-03-13 00:42:24.544332 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:42:24.544342 | orchestrator |
2026-03-13 00:42:24.544351 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:42:24.544361 | orchestrator | Friday 13 March 2026  00:42:23 +0000 (0:00:00.180) 0:00:18.842 **********
2026-03-13 00:42:24.544370 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:42:24.544380 | orchestrator |
2026-03-13 00:42:24.544389 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:42:24.544398 | orchestrator | Friday 13 March 2026  00:42:23 +0000 (0:00:00.183) 0:00:19.025 **********
2026-03-13 00:42:24.544408 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-03-13 00:42:24.544418 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-03-13 00:42:24.544427 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-03-13 00:42:24.544437 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-03-13 00:42:24.544446 | orchestrator |
2026-03-13 00:42:24.544456 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:42:24.544465 | orchestrator | Friday 13 March 2026  00:42:24 +0000 (0:00:00.741) 0:00:19.767 **********
2026-03-13 00:42:24.544475 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:42:30.286318 | orchestrator |
2026-03-13 00:42:30.286428 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:42:30.286442 | orchestrator | Friday 13 March 2026  00:42:24 +0000 (0:00:00.306) 0:00:20.073 **********
2026-03-13 00:42:30.286450 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:42:30.286459 | orchestrator |
2026-03-13 00:42:30.286466 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:42:30.286474 | orchestrator | Friday 13 March 2026  00:42:24 +0000 (0:00:00.339) 0:00:20.413 **********
2026-03-13 00:42:30.286481 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:42:30.286487 | orchestrator |
2026-03-13 00:42:30.286495 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:42:30.286502 | orchestrator | Friday 13 March 2026  00:42:25 +0000 (0:00:00.170) 0:00:20.584 **********
2026-03-13 00:42:30.286509 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:42:30.286515 | orchestrator |
2026-03-13 00:42:30.286522 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-13 00:42:30.286529 | orchestrator | Friday 13 March 2026  00:42:25 +0000 (0:00:00.561) 0:00:21.145 **********
2026-03-13 00:42:30.286536 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-03-13 00:42:30.286543 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-03-13 00:42:30.286550 | orchestrator |
2026-03-13 00:42:30.286568 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-13 00:42:30.286575 | orchestrator | Friday 13 March 2026  00:42:25 +0000 (0:00:00.146) 0:00:21.292 **********
2026-03-13 00:42:30.286582 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:42:30.286589 | orchestrator |
2026-03-13 00:42:30.286596 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-13 00:42:30.286602 | orchestrator | Friday 13 March 2026  00:42:25 +0000 (0:00:00.108) 0:00:21.401 **********
2026-03-13 00:42:30.286609 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:42:30.286616 | orchestrator |
2026-03-13 00:42:30.286622 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-13 00:42:30.286629 | orchestrator | Friday 13 March 2026  00:42:26 +0000 (0:00:00.116) 0:00:21.517 **********
2026-03-13 00:42:30.286636 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:42:30.286643 | orchestrator |
2026-03-13 00:42:30.286650 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-13 00:42:30.286657 | orchestrator | Friday 13 March 2026  00:42:26 +0000 (0:00:00.115) 0:00:21.633 **********
2026-03-13 00:42:30.286682 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:42:30.286690 | orchestrator |
2026-03-13 00:42:30.286696 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-13 00:42:30.286703 | orchestrator | Friday 13 March 2026  00:42:26 +0000 (0:00:00.117) 0:00:21.750 **********
2026-03-13 00:42:30.286710 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '49707cb0-36ac-571b-bf56-7288c46886ca'}})
2026-03-13 00:42:30.286718 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '798cee0b-732e-51b2-a8a3-29d8c2932297'}})
2026-03-13 00:42:30.286724 | orchestrator |
2026-03-13 00:42:30.286731 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-13 00:42:30.286738 | orchestrator | Friday 13 March 2026  00:42:26 +0000 (0:00:00.139) 0:00:21.890 **********
2026-03-13 00:42:30.286745 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '49707cb0-36ac-571b-bf56-7288c46886ca'}})
2026-03-13 00:42:30.286754 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '798cee0b-732e-51b2-a8a3-29d8c2932297'}})
2026-03-13 00:42:30.286760 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:42:30.286767 | orchestrator |
2026-03-13 00:42:30.286774 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-13 00:42:30.286781 | orchestrator | Friday 13 March 2026  00:42:26 +0000 (0:00:00.116) 0:00:22.006 **********
2026-03-13 00:42:30.286787 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '49707cb0-36ac-571b-bf56-7288c46886ca'}})
2026-03-13 00:42:30.286794 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '798cee0b-732e-51b2-a8a3-29d8c2932297'}})
2026-03-13 00:42:30.286801 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:42:30.286808 | orchestrator |
2026-03-13 00:42:30.286815 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-13 00:42:30.286821 | orchestrator | Friday 13 March 2026  00:42:26 +0000 (0:00:00.109) 0:00:22.116 **********
2026-03-13 00:42:30.286828 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '49707cb0-36ac-571b-bf56-7288c46886ca'}})
2026-03-13 00:42:30.286835 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '798cee0b-732e-51b2-a8a3-29d8c2932297'}})
2026-03-13 00:42:30.286842 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:42:30.286848 | orchestrator |
2026-03-13 00:42:30.286868 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-13 00:42:30.286876 | orchestrator | Friday 13 March 2026  00:42:26 +0000 (0:00:00.112) 0:00:22.229 **********
2026-03-13 00:42:30.286884 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:42:30.286892 | orchestrator |
2026-03-13 00:42:30.286899 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-13 00:42:30.286907 | orchestrator | Friday 13 March 2026  00:42:26 +0000 (0:00:00.113) 0:00:22.342 **********
2026-03-13 00:42:30.286915 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:42:30.286922 | orchestrator |
2026-03-13 00:42:30.286930 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-13 00:42:30.286938 | orchestrator | Friday 13 March 2026  00:42:26 +0000 (0:00:00.111) 0:00:22.453 **********
2026-03-13 00:42:30.286959 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:42:30.286967 | orchestrator |
2026-03-13 00:42:30.286975 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-13 00:42:30.286982 | orchestrator | Friday 13 March 2026  00:42:27 +0000 (0:00:00.259) 0:00:22.713 **********
2026-03-13 00:42:30.286990 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:42:30.286998 | orchestrator |
2026-03-13 00:42:30.287005 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-13 00:42:30.287013 | orchestrator | Friday 13 March 2026  00:42:27 +0000 (0:00:00.119) 0:00:22.833 **********
2026-03-13 00:42:30.287021 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:42:30.287048 | orchestrator |
2026-03-13 00:42:30.287055 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-13 00:42:30.287061 | orchestrator | Friday 13 March 2026  00:42:27 +0000 (0:00:00.121) 0:00:22.954 **********
2026-03-13 00:42:30.287068 | orchestrator | ok: [testbed-node-4] => {
2026-03-13 00:42:30.287075 | orchestrator |     "ceph_osd_devices": {
2026-03-13 00:42:30.287081 | orchestrator |         "sdb": {
2026-03-13 00:42:30.287088 | orchestrator |             "osd_lvm_uuid": "49707cb0-36ac-571b-bf56-7288c46886ca"
2026-03-13 00:42:30.287095 | orchestrator |         },
2026-03-13 00:42:30.287101 | orchestrator |         "sdc": {
2026-03-13 00:42:30.287108 | orchestrator |             "osd_lvm_uuid": "798cee0b-732e-51b2-a8a3-29d8c2932297"
2026-03-13 00:42:30.287115 | orchestrator |         }
2026-03-13 00:42:30.287121 | orchestrator |     }
2026-03-13 00:42:30.287128 | orchestrator | }
2026-03-13 00:42:30.287135 | orchestrator |
2026-03-13 00:42:30.287142 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-13 00:42:30.287148 | orchestrator | Friday 13 March 2026  00:42:27 +0000 (0:00:00.133) 0:00:23.087 **********
2026-03-13 00:42:30.287155 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:42:30.287162 | orchestrator |
2026-03-13 00:42:30.287168 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-13 00:42:30.287175 | orchestrator | Friday 13 March 2026  00:42:27 +0000 (0:00:00.120) 0:00:23.208 **********
2026-03-13 00:42:30.287181 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:42:30.287188 | orchestrator |
2026-03-13 00:42:30.287194 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-13 00:42:30.287201 | orchestrator | Friday 13 March 2026  00:42:27 +0000 (0:00:00.141) 0:00:23.350 **********
2026-03-13 00:42:30.287207 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:42:30.287214 | orchestrator |
2026-03-13 00:42:30.287221 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-13 00:42:30.287227 | orchestrator | Friday 13 March 2026  00:42:27 +0000 (0:00:00.132) 0:00:23.483 **********
2026-03-13 00:42:30.287234 | orchestrator | changed: [testbed-node-4] => {
2026-03-13 00:42:30.287240 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-13 00:42:30.287247 | orchestrator |         "ceph_osd_devices": {
2026-03-13 00:42:30.287254 | orchestrator |             "sdb": {
2026-03-13 00:42:30.287260 | orchestrator |                 "osd_lvm_uuid": "49707cb0-36ac-571b-bf56-7288c46886ca"
2026-03-13 00:42:30.287267 | orchestrator |             },
2026-03-13 00:42:30.287274 | orchestrator |             "sdc": {
2026-03-13 00:42:30.287280 | orchestrator |                 "osd_lvm_uuid": "798cee0b-732e-51b2-a8a3-29d8c2932297"
2026-03-13 00:42:30.287287 | orchestrator |             }
2026-03-13 00:42:30.287293 | orchestrator |         },
2026-03-13 00:42:30.287300 | orchestrator |         "lvm_volumes": [
2026-03-13 00:42:30.287306 | orchestrator |             {
2026-03-13 00:42:30.287313 | orchestrator |                 "data": "osd-block-49707cb0-36ac-571b-bf56-7288c46886ca",
2026-03-13 00:42:30.287320 | orchestrator |                 "data_vg": "ceph-49707cb0-36ac-571b-bf56-7288c46886ca"
2026-03-13 00:42:30.287326 | orchestrator |             },
2026-03-13 00:42:30.287333 | orchestrator |             {
2026-03-13 00:42:30.287339 | orchestrator |                 "data": "osd-block-798cee0b-732e-51b2-a8a3-29d8c2932297",
2026-03-13 00:42:30.287346 | orchestrator |                 "data_vg": "ceph-798cee0b-732e-51b2-a8a3-29d8c2932297"
2026-03-13 00:42:30.287353 | orchestrator |             }
2026-03-13 00:42:30.287359 | orchestrator |         ]
2026-03-13 00:42:30.287366 | orchestrator |     }
2026-03-13 00:42:30.287373 | orchestrator | }
2026-03-13 00:42:30.287379 | orchestrator |
2026-03-13 00:42:30.287386 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-13 00:42:30.287392 | orchestrator | Friday 13 March 2026  00:42:28 +0000 (0:00:00.228) 0:00:23.711 **********
2026-03-13 00:42:30.287399 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-13 00:42:30.287406 | orchestrator |
2026-03-13 00:42:30.287418 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-13 00:42:30.287425 | orchestrator |
2026-03-13 00:42:30.287431 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-13 00:42:30.287438 | orchestrator | Friday 13 March 2026  00:42:29 +0000 (0:00:01.017) 0:00:24.728 **********
2026-03-13 00:42:30.287445 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-13 00:42:30.287451 | orchestrator |
2026-03-13 00:42:30.287458 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-13 00:42:30.287465 | orchestrator | Friday 13 March 2026  00:42:29 +0000 (0:00:00.543) 0:00:25.271 **********
2026-03-13 00:42:30.287471 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:42:30.287478 | orchestrator |
2026-03-13 00:42:30.287484 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:42:30.287491 | orchestrator | Friday 13 March 2026  00:42:30 +0000 (0:00:00.214) 0:00:25.486 **********
2026-03-13 00:42:30.287497 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-03-13 00:42:30.287504 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-03-13 00:42:30.287511 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-03-13 00:42:30.287517 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-13 00:42:30.287524 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-13 00:42:30.287535 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-13 00:42:37.206819 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-13 00:42:37.206895
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-13 00:42:37.206901 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-13 00:42:37.206906 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-13 00:42:37.206922 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-13 00:42:37.206927 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-13 00:42:37.206931 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-13 00:42:37.206935 | orchestrator | 2026-03-13 00:42:37.206940 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:42:37.206945 | orchestrator | Friday 13 March 2026 00:42:30 +0000 (0:00:00.361) 0:00:25.847 ********** 2026-03-13 00:42:37.206949 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:42:37.206954 | orchestrator | 2026-03-13 00:42:37.206958 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:42:37.206962 | orchestrator | Friday 13 March 2026 00:42:30 +0000 (0:00:00.176) 0:00:26.023 ********** 2026-03-13 00:42:37.206966 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:42:37.206969 | orchestrator | 2026-03-13 00:42:37.206973 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:42:37.206977 | orchestrator | Friday 13 March 2026 00:42:30 +0000 (0:00:00.137) 0:00:26.161 ********** 2026-03-13 00:42:37.206981 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:42:37.206984 | orchestrator | 2026-03-13 00:42:37.206988 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:42:37.206992 | 
orchestrator | Friday 13 March 2026 00:42:30 +0000 (0:00:00.197) 0:00:26.358 ********** 2026-03-13 00:42:37.206998 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:42:37.207001 | orchestrator | 2026-03-13 00:42:37.207005 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:42:37.207009 | orchestrator | Friday 13 March 2026 00:42:31 +0000 (0:00:00.184) 0:00:26.542 ********** 2026-03-13 00:42:37.207026 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:42:37.207030 | orchestrator | 2026-03-13 00:42:37.207046 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:42:37.207050 | orchestrator | Friday 13 March 2026 00:42:31 +0000 (0:00:00.176) 0:00:26.719 ********** 2026-03-13 00:42:37.207054 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:42:37.207058 | orchestrator | 2026-03-13 00:42:37.207061 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:42:37.207065 | orchestrator | Friday 13 March 2026 00:42:31 +0000 (0:00:00.173) 0:00:26.893 ********** 2026-03-13 00:42:37.207069 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:42:37.207072 | orchestrator | 2026-03-13 00:42:37.207076 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:42:37.207080 | orchestrator | Friday 13 March 2026 00:42:31 +0000 (0:00:00.171) 0:00:27.064 ********** 2026-03-13 00:42:37.207084 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:42:37.207087 | orchestrator | 2026-03-13 00:42:37.207091 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:42:37.207095 | orchestrator | Friday 13 March 2026 00:42:31 +0000 (0:00:00.186) 0:00:27.251 ********** 2026-03-13 00:42:37.207099 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf) 2026-03-13 00:42:37.207103 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf) 2026-03-13 00:42:37.207107 | orchestrator | 2026-03-13 00:42:37.207111 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:42:37.207114 | orchestrator | Friday 13 March 2026 00:42:32 +0000 (0:00:00.647) 0:00:27.898 ********** 2026-03-13 00:42:37.207118 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_173613da-cd5d-4175-9e2e-faf4092bf0a3) 2026-03-13 00:42:37.207122 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_173613da-cd5d-4175-9e2e-faf4092bf0a3) 2026-03-13 00:42:37.207125 | orchestrator | 2026-03-13 00:42:37.207129 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:42:37.207133 | orchestrator | Friday 13 March 2026 00:42:32 +0000 (0:00:00.404) 0:00:28.303 ********** 2026-03-13 00:42:37.207136 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_dc527160-e7af-4d74-be06-07ea7bd10a9b) 2026-03-13 00:42:37.207140 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_dc527160-e7af-4d74-be06-07ea7bd10a9b) 2026-03-13 00:42:37.207144 | orchestrator | 2026-03-13 00:42:37.207147 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:42:37.207151 | orchestrator | Friday 13 March 2026 00:42:33 +0000 (0:00:00.388) 0:00:28.692 ********** 2026-03-13 00:42:37.207155 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2fefda09-8576-4844-bc1b-e9a7eb3ad8aa) 2026-03-13 00:42:37.207158 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2fefda09-8576-4844-bc1b-e9a7eb3ad8aa) 2026-03-13 00:42:37.207162 | orchestrator | 2026-03-13 00:42:37.207166 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-03-13 00:42:37.207169 | orchestrator | Friday 13 March 2026 00:42:33 +0000 (0:00:00.407) 0:00:29.099 ********** 2026-03-13 00:42:37.207173 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-13 00:42:37.207177 | orchestrator | 2026-03-13 00:42:37.207180 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:42:37.207196 | orchestrator | Friday 13 March 2026 00:42:33 +0000 (0:00:00.283) 0:00:29.383 ********** 2026-03-13 00:42:37.207200 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-13 00:42:37.207203 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-13 00:42:37.207208 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-13 00:42:37.207211 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-13 00:42:37.207220 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-13 00:42:37.207224 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-13 00:42:37.207227 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-13 00:42:37.207231 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-13 00:42:37.207234 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-13 00:42:37.207238 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-13 00:42:37.207242 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-03-13 00:42:37.207245 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-13 00:42:37.207249 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-13 00:42:37.207253 | orchestrator | 2026-03-13 00:42:37.207256 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:42:37.207260 | orchestrator | Friday 13 March 2026 00:42:34 +0000 (0:00:00.363) 0:00:29.746 ********** 2026-03-13 00:42:37.207264 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:42:37.207267 | orchestrator | 2026-03-13 00:42:37.207271 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:42:37.207275 | orchestrator | Friday 13 March 2026 00:42:34 +0000 (0:00:00.169) 0:00:29.915 ********** 2026-03-13 00:42:37.207278 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:42:37.207282 | orchestrator | 2026-03-13 00:42:37.207286 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:42:37.207289 | orchestrator | Friday 13 March 2026 00:42:34 +0000 (0:00:00.179) 0:00:30.095 ********** 2026-03-13 00:42:37.207293 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:42:37.207296 | orchestrator | 2026-03-13 00:42:37.207300 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:42:37.207307 | orchestrator | Friday 13 March 2026 00:42:34 +0000 (0:00:00.177) 0:00:30.273 ********** 2026-03-13 00:42:37.207310 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:42:37.207314 | orchestrator | 2026-03-13 00:42:37.207318 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:42:37.207321 | orchestrator | Friday 13 March 2026 00:42:34 +0000 (0:00:00.184) 0:00:30.458 ********** 2026-03-13 00:42:37.207325 
| orchestrator | skipping: [testbed-node-5] 2026-03-13 00:42:37.207329 | orchestrator | 2026-03-13 00:42:37.207332 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:42:37.207336 | orchestrator | Friday 13 March 2026 00:42:35 +0000 (0:00:00.204) 0:00:30.662 ********** 2026-03-13 00:42:37.207340 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:42:37.207343 | orchestrator | 2026-03-13 00:42:37.207347 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:42:37.207351 | orchestrator | Friday 13 March 2026 00:42:35 +0000 (0:00:00.461) 0:00:31.124 ********** 2026-03-13 00:42:37.207354 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:42:37.207358 | orchestrator | 2026-03-13 00:42:37.207361 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:42:37.207365 | orchestrator | Friday 13 March 2026 00:42:35 +0000 (0:00:00.173) 0:00:31.298 ********** 2026-03-13 00:42:37.207369 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:42:37.207372 | orchestrator | 2026-03-13 00:42:37.207376 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:42:37.207380 | orchestrator | Friday 13 March 2026 00:42:35 +0000 (0:00:00.166) 0:00:31.464 ********** 2026-03-13 00:42:37.207383 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-13 00:42:37.207391 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-03-13 00:42:37.207396 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-13 00:42:37.207400 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-13 00:42:37.207404 | orchestrator | 2026-03-13 00:42:37.207408 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:42:37.207412 | orchestrator | Friday 13 March 2026 00:42:36 +0000 (0:00:00.571) 0:00:32.036 
********** 2026-03-13 00:42:37.207417 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:42:37.207421 | orchestrator | 2026-03-13 00:42:37.207425 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:42:37.207430 | orchestrator | Friday 13 March 2026 00:42:36 +0000 (0:00:00.173) 0:00:32.210 ********** 2026-03-13 00:42:37.207434 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:42:37.207438 | orchestrator | 2026-03-13 00:42:37.207442 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:42:37.207447 | orchestrator | Friday 13 March 2026 00:42:36 +0000 (0:00:00.144) 0:00:32.354 ********** 2026-03-13 00:42:37.207451 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:42:37.207455 | orchestrator | 2026-03-13 00:42:37.207459 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:42:37.207463 | orchestrator | Friday 13 March 2026 00:42:37 +0000 (0:00:00.173) 0:00:32.527 ********** 2026-03-13 00:42:37.207468 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:42:37.207472 | orchestrator | 2026-03-13 00:42:37.207479 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-13 00:42:40.660623 | orchestrator | Friday 13 March 2026 00:42:37 +0000 (0:00:00.162) 0:00:32.690 ********** 2026-03-13 00:42:40.660696 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-03-13 00:42:40.660702 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-03-13 00:42:40.660707 | orchestrator | 2026-03-13 00:42:40.660712 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-13 00:42:40.660717 | orchestrator | Friday 13 March 2026 00:42:37 +0000 (0:00:00.138) 0:00:32.829 ********** 2026-03-13 00:42:40.660721 | orchestrator | skipping: 
[testbed-node-5] 2026-03-13 00:42:40.660725 | orchestrator | 2026-03-13 00:42:40.660729 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-13 00:42:40.660733 | orchestrator | Friday 13 March 2026 00:42:37 +0000 (0:00:00.103) 0:00:32.932 ********** 2026-03-13 00:42:40.660736 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:42:40.660740 | orchestrator | 2026-03-13 00:42:40.660744 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-13 00:42:40.660748 | orchestrator | Friday 13 March 2026 00:42:37 +0000 (0:00:00.095) 0:00:33.028 ********** 2026-03-13 00:42:40.660752 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:42:40.660755 | orchestrator | 2026-03-13 00:42:40.660760 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-13 00:42:40.660763 | orchestrator | Friday 13 March 2026 00:42:37 +0000 (0:00:00.225) 0:00:33.254 ********** 2026-03-13 00:42:40.660767 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:42:40.660772 | orchestrator | 2026-03-13 00:42:40.660775 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-13 00:42:40.660779 | orchestrator | Friday 13 March 2026 00:42:37 +0000 (0:00:00.095) 0:00:33.350 ********** 2026-03-13 00:42:40.660783 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '119e494c-61db-56d2-84c4-ae65d8356f6a'}}) 2026-03-13 00:42:40.660788 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5854fe4a-6d96-56a2-8017-73d7ac8736b8'}}) 2026-03-13 00:42:40.660791 | orchestrator | 2026-03-13 00:42:40.660795 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-13 00:42:40.660799 | orchestrator | Friday 13 March 2026 00:42:38 +0000 (0:00:00.145) 0:00:33.496 ********** 2026-03-13 00:42:40.660804 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '119e494c-61db-56d2-84c4-ae65d8356f6a'}})  2026-03-13 00:42:40.660825 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5854fe4a-6d96-56a2-8017-73d7ac8736b8'}})  2026-03-13 00:42:40.660829 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:42:40.660832 | orchestrator | 2026-03-13 00:42:40.660836 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-13 00:42:40.660840 | orchestrator | Friday 13 March 2026 00:42:38 +0000 (0:00:00.123) 0:00:33.619 ********** 2026-03-13 00:42:40.660844 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '119e494c-61db-56d2-84c4-ae65d8356f6a'}})  2026-03-13 00:42:40.660848 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5854fe4a-6d96-56a2-8017-73d7ac8736b8'}})  2026-03-13 00:42:40.660851 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:42:40.660855 | orchestrator | 2026-03-13 00:42:40.660859 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-13 00:42:40.660863 | orchestrator | Friday 13 March 2026 00:42:38 +0000 (0:00:00.133) 0:00:33.753 ********** 2026-03-13 00:42:40.660866 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '119e494c-61db-56d2-84c4-ae65d8356f6a'}})  2026-03-13 00:42:40.660870 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5854fe4a-6d96-56a2-8017-73d7ac8736b8'}})  2026-03-13 00:42:40.660874 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:42:40.660878 | orchestrator | 2026-03-13 00:42:40.660882 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-13 00:42:40.660885 | orchestrator | Friday 13 March 2026 00:42:38 +0000 
(0:00:00.118) 0:00:33.871 ********** 2026-03-13 00:42:40.660889 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:42:40.660893 | orchestrator | 2026-03-13 00:42:40.660897 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-13 00:42:40.660900 | orchestrator | Friday 13 March 2026 00:42:38 +0000 (0:00:00.174) 0:00:34.046 ********** 2026-03-13 00:42:40.660904 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:42:40.660908 | orchestrator | 2026-03-13 00:42:40.660912 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-13 00:42:40.660915 | orchestrator | Friday 13 March 2026 00:42:38 +0000 (0:00:00.119) 0:00:34.166 ********** 2026-03-13 00:42:40.660919 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:42:40.660923 | orchestrator | 2026-03-13 00:42:40.660927 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-13 00:42:40.660930 | orchestrator | Friday 13 March 2026 00:42:38 +0000 (0:00:00.139) 0:00:34.305 ********** 2026-03-13 00:42:40.660934 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:42:40.660938 | orchestrator | 2026-03-13 00:42:40.660941 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-13 00:42:40.660945 | orchestrator | Friday 13 March 2026 00:42:38 +0000 (0:00:00.097) 0:00:34.403 ********** 2026-03-13 00:42:40.660949 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:42:40.660953 | orchestrator | 2026-03-13 00:42:40.660956 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-13 00:42:40.660960 | orchestrator | Friday 13 March 2026 00:42:39 +0000 (0:00:00.110) 0:00:34.513 ********** 2026-03-13 00:42:40.660964 | orchestrator | ok: [testbed-node-5] => { 2026-03-13 00:42:40.660968 | orchestrator |  "ceph_osd_devices": { 2026-03-13 00:42:40.660972 | orchestrator |  "sdb": 
{ 2026-03-13 00:42:40.660986 | orchestrator |  "osd_lvm_uuid": "119e494c-61db-56d2-84c4-ae65d8356f6a" 2026-03-13 00:42:40.660990 | orchestrator |  }, 2026-03-13 00:42:40.660994 | orchestrator |  "sdc": { 2026-03-13 00:42:40.661010 | orchestrator |  "osd_lvm_uuid": "5854fe4a-6d96-56a2-8017-73d7ac8736b8" 2026-03-13 00:42:40.661014 | orchestrator |  } 2026-03-13 00:42:40.661018 | orchestrator |  } 2026-03-13 00:42:40.661022 | orchestrator | } 2026-03-13 00:42:40.661026 | orchestrator | 2026-03-13 00:42:40.661033 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-13 00:42:40.661052 | orchestrator | Friday 13 March 2026 00:42:39 +0000 (0:00:00.128) 0:00:34.642 ********** 2026-03-13 00:42:40.661056 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:42:40.661059 | orchestrator | 2026-03-13 00:42:40.661063 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-13 00:42:40.661067 | orchestrator | Friday 13 March 2026 00:42:39 +0000 (0:00:00.232) 0:00:34.875 ********** 2026-03-13 00:42:40.661071 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:42:40.661074 | orchestrator | 2026-03-13 00:42:40.661078 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-13 00:42:40.661082 | orchestrator | Friday 13 March 2026 00:42:39 +0000 (0:00:00.115) 0:00:34.991 ********** 2026-03-13 00:42:40.661086 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:42:40.661089 | orchestrator | 2026-03-13 00:42:40.661093 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-13 00:42:40.661097 | orchestrator | Friday 13 March 2026 00:42:39 +0000 (0:00:00.109) 0:00:35.101 ********** 2026-03-13 00:42:40.661101 | orchestrator | changed: [testbed-node-5] => { 2026-03-13 00:42:40.661104 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-13 00:42:40.661108 | orchestrator 
|  "ceph_osd_devices": { 2026-03-13 00:42:40.661112 | orchestrator |  "sdb": { 2026-03-13 00:42:40.661116 | orchestrator |  "osd_lvm_uuid": "119e494c-61db-56d2-84c4-ae65d8356f6a" 2026-03-13 00:42:40.661120 | orchestrator |  }, 2026-03-13 00:42:40.661124 | orchestrator |  "sdc": { 2026-03-13 00:42:40.661130 | orchestrator |  "osd_lvm_uuid": "5854fe4a-6d96-56a2-8017-73d7ac8736b8" 2026-03-13 00:42:40.661134 | orchestrator |  } 2026-03-13 00:42:40.661138 | orchestrator |  }, 2026-03-13 00:42:40.661142 | orchestrator |  "lvm_volumes": [ 2026-03-13 00:42:40.661145 | orchestrator |  { 2026-03-13 00:42:40.661149 | orchestrator |  "data": "osd-block-119e494c-61db-56d2-84c4-ae65d8356f6a", 2026-03-13 00:42:40.661153 | orchestrator |  "data_vg": "ceph-119e494c-61db-56d2-84c4-ae65d8356f6a" 2026-03-13 00:42:40.661157 | orchestrator |  }, 2026-03-13 00:42:40.661163 | orchestrator |  { 2026-03-13 00:42:40.661167 | orchestrator |  "data": "osd-block-5854fe4a-6d96-56a2-8017-73d7ac8736b8", 2026-03-13 00:42:40.661170 | orchestrator |  "data_vg": "ceph-5854fe4a-6d96-56a2-8017-73d7ac8736b8" 2026-03-13 00:42:40.661174 | orchestrator |  } 2026-03-13 00:42:40.661178 | orchestrator |  ] 2026-03-13 00:42:40.661182 | orchestrator |  } 2026-03-13 00:42:40.661186 | orchestrator | } 2026-03-13 00:42:40.661189 | orchestrator | 2026-03-13 00:42:40.661193 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-13 00:42:40.661198 | orchestrator | Friday 13 March 2026 00:42:39 +0000 (0:00:00.200) 0:00:35.301 ********** 2026-03-13 00:42:40.661202 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-13 00:42:40.661207 | orchestrator | 2026-03-13 00:42:40.661211 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 00:42:40.661215 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-13 00:42:40.661221 | 
orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-13 00:42:40.661225 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-13 00:42:40.661229 | orchestrator | 2026-03-13 00:42:40.661234 | orchestrator | 2026-03-13 00:42:40.661239 | orchestrator | 2026-03-13 00:42:40.661243 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 00:42:40.661248 | orchestrator | Friday 13 March 2026 00:42:40 +0000 (0:00:00.827) 0:00:36.129 ********** 2026-03-13 00:42:40.661256 | orchestrator | =============================================================================== 2026-03-13 00:42:40.661261 | orchestrator | Write configuration file ------------------------------------------------ 3.39s 2026-03-13 00:42:40.661265 | orchestrator | Add known links to the list of available block devices ------------------ 1.18s 2026-03-13 00:42:40.661269 | orchestrator | Add known partitions to the list of available block devices ------------- 1.08s 2026-03-13 00:42:40.661274 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.00s 2026-03-13 00:42:40.661278 | orchestrator | Add known partitions to the list of available block devices ------------- 0.79s 2026-03-13 00:42:40.661282 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s 2026-03-13 00:42:40.661286 | orchestrator | Print configuration data ------------------------------------------------ 0.71s 2026-03-13 00:42:40.661291 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2026-03-13 00:42:40.661295 | orchestrator | Get initial list of available block devices ----------------------------- 0.65s 2026-03-13 00:42:40.661299 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s 2026-03-13 
00:42:40.661303 | orchestrator | Add known partitions to the list of available block devices ------------- 0.57s 2026-03-13 00:42:40.661307 | orchestrator | Add known partitions to the list of available block devices ------------- 0.56s 2026-03-13 00:42:40.661312 | orchestrator | Add known links to the list of available block devices ------------------ 0.52s 2026-03-13 00:42:40.661319 | orchestrator | Set DB devices config data ---------------------------------------------- 0.51s 2026-03-13 00:42:40.879426 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.51s 2026-03-13 00:42:40.879499 | orchestrator | Add known links to the list of available block devices ------------------ 0.50s 2026-03-13 00:42:40.879505 | orchestrator | Add known partitions to the list of available block devices ------------- 0.48s 2026-03-13 00:42:40.879509 | orchestrator | Print WAL devices ------------------------------------------------------- 0.47s 2026-03-13 00:42:40.879514 | orchestrator | Add known partitions to the list of available block devices ------------- 0.46s 2026-03-13 00:42:40.879518 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.46s 2026-03-13 00:43:03.106860 | orchestrator | 2026-03-13 00:43:03 | INFO  | Task 182e5116-0b4d-4000-a594-9dc894db1bc0 (sync inventory) is running in background. Output coming soon. 
2026-03-13 00:43:28.846575 | orchestrator | 2026-03-13 00:43:04 | INFO  | Starting group_vars file reorganization
2026-03-13 00:43:28.846666 | orchestrator | 2026-03-13 00:43:04 | INFO  | Moved 0 file(s) to their respective directories
2026-03-13 00:43:28.846677 | orchestrator | 2026-03-13 00:43:04 | INFO  | Group_vars file reorganization completed
2026-03-13 00:43:28.846685 | orchestrator | 2026-03-13 00:43:07 | INFO  | Starting variable preparation from inventory
2026-03-13 00:43:28.846692 | orchestrator | 2026-03-13 00:43:10 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-13 00:43:28.846699 | orchestrator | 2026-03-13 00:43:10 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-13 00:43:28.846705 | orchestrator | 2026-03-13 00:43:10 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-13 00:43:28.846712 | orchestrator | 2026-03-13 00:43:10 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-13 00:43:28.846718 | orchestrator | 2026-03-13 00:43:10 | INFO  | Variable preparation completed
2026-03-13 00:43:28.846724 | orchestrator | 2026-03-13 00:43:11 | INFO  | Starting inventory overwrite handling
2026-03-13 00:43:28.846730 | orchestrator | 2026-03-13 00:43:11 | INFO  | Handling group overwrites in 99-overwrite
2026-03-13 00:43:28.846737 | orchestrator | 2026-03-13 00:43:11 | INFO  | Removing group frr:children from 60-generic
2026-03-13 00:43:28.846765 | orchestrator | 2026-03-13 00:43:11 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-13 00:43:28.846772 | orchestrator | 2026-03-13 00:43:11 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-13 00:43:28.846779 | orchestrator | 2026-03-13 00:43:11 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-13 00:43:28.846785 | orchestrator | 2026-03-13 00:43:11 | INFO  | Handling group overwrites in 20-roles
2026-03-13 00:43:28.846791 | orchestrator | 2026-03-13 00:43:11 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-13 00:43:28.846797 | orchestrator | 2026-03-13 00:43:11 | INFO  | Removed 5 group(s) in total
2026-03-13 00:43:28.846803 | orchestrator | 2026-03-13 00:43:11 | INFO  | Inventory overwrite handling completed
2026-03-13 00:43:28.846810 | orchestrator | 2026-03-13 00:43:12 | INFO  | Starting merge of inventory files
2026-03-13 00:43:28.846816 | orchestrator | 2026-03-13 00:43:12 | INFO  | Inventory files merged successfully
2026-03-13 00:43:28.846822 | orchestrator | 2026-03-13 00:43:17 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-13 00:43:28.846828 | orchestrator | 2026-03-13 00:43:27 | INFO  | Successfully wrote ClusterShell configuration
2026-03-13 00:43:28.846835 | orchestrator | [master 615407f] 2026-03-13-00-43
2026-03-13 00:43:28.846842 | orchestrator |  1 file changed, 30 insertions(+), 9 deletions(-)
2026-03-13 00:43:30.871350 | orchestrator | 2026-03-13 00:43:30 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-03-13 00:43:30.922661 | orchestrator | 2026-03-13 00:43:30 | INFO  | Task 744380e1-b734-44a0-aa06-990f14d16ad5 (ceph-create-lvm-devices) was prepared for execution.
2026-03-13 00:43:30.922747 | orchestrator | 2026-03-13 00:43:30 | INFO  | It takes a moment until task 744380e1-b734-44a0-aa06-990f14d16ad5 (ceph-create-lvm-devices) has been started and output is visible here.
2026-03-13 00:43:41.073030 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-13 00:43:41.073248 | orchestrator | 2.16.14
2026-03-13 00:43:41.073287 | orchestrator |
2026-03-13 00:43:41.073302 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-13 00:43:41.073311 | orchestrator |
2026-03-13 00:43:41.073319 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-13 00:43:41.073333 | orchestrator | Friday 13 March 2026 00:43:34 +0000 (0:00:00.233) 0:00:00.233 **********
2026-03-13 00:43:41.073345 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-13 00:43:41.073357 | orchestrator |
2026-03-13 00:43:41.073370 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-13 00:43:41.073380 | orchestrator | Friday 13 March 2026 00:43:35 +0000 (0:00:00.218) 0:00:00.452 **********
2026-03-13 00:43:41.073391 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:43:41.073402 | orchestrator |
2026-03-13 00:43:41.073413 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:43:41.073426 | orchestrator | Friday 13 March 2026 00:43:35 +0000 (0:00:00.162) 0:00:00.615 **********
2026-03-13 00:43:41.073438 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-13 00:43:41.073450 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-13 00:43:41.073463 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-13 00:43:41.073477 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-13 00:43:41.073485 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-13 00:43:41.073492 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-13 00:43:41.073500 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-13 00:43:41.073537 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-13 00:43:41.073549 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-13 00:43:41.073561 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-13 00:43:41.073572 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-13 00:43:41.073585 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-13 00:43:41.073611 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-13 00:43:41.073624 | orchestrator |
2026-03-13 00:43:41.073635 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:43:41.073642 | orchestrator | Friday 13 March 2026 00:43:35 +0000 (0:00:00.375) 0:00:00.990 **********
2026-03-13 00:43:41.073650 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:41.073657 | orchestrator |
2026-03-13 00:43:41.073664 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:43:41.073671 | orchestrator | Friday 13 March 2026 00:43:35 +0000 (0:00:00.218) 0:00:01.208 **********
2026-03-13 00:43:41.073678 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:41.073685 | orchestrator |
2026-03-13 00:43:41.073693 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:43:41.073700 | orchestrator | Friday 13 March 2026 00:43:35 +0000 (0:00:00.162) 0:00:01.371 **********
2026-03-13 00:43:41.073707 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:41.073714 | orchestrator |
2026-03-13 00:43:41.073721 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:43:41.073728 | orchestrator | Friday 13 March 2026 00:43:36 +0000 (0:00:00.146) 0:00:01.518 **********
2026-03-13 00:43:41.073735 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:41.073742 | orchestrator |
2026-03-13 00:43:41.073749 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:43:41.073756 | orchestrator | Friday 13 March 2026 00:43:36 +0000 (0:00:00.171) 0:00:01.690 **********
2026-03-13 00:43:41.073763 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:41.073771 | orchestrator |
2026-03-13 00:43:41.073778 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:43:41.073785 | orchestrator | Friday 13 March 2026 00:43:36 +0000 (0:00:00.166) 0:00:01.856 **********
2026-03-13 00:43:41.073792 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:41.073799 | orchestrator |
2026-03-13 00:43:41.073806 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:43:41.073814 | orchestrator | Friday 13 March 2026 00:43:36 +0000 (0:00:00.174) 0:00:02.030 **********
2026-03-13 00:43:41.073821 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:41.073828 | orchestrator |
2026-03-13 00:43:41.073835 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:43:41.073842 | orchestrator | Friday 13 March 2026 00:43:36 +0000 (0:00:00.163) 0:00:02.193 **********
2026-03-13 00:43:41.073849 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:41.073857 | orchestrator |
2026-03-13 00:43:41.073864 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:43:41.073872 | orchestrator | Friday 13 March 2026 00:43:36 +0000 (0:00:00.184) 0:00:02.378 **********
2026-03-13 00:43:41.073879 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d)
2026-03-13 00:43:41.073887 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d)
2026-03-13 00:43:41.073894 | orchestrator |
2026-03-13 00:43:41.073902 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:43:41.073928 | orchestrator | Friday 13 March 2026 00:43:37 +0000 (0:00:00.373) 0:00:02.752 **********
2026-03-13 00:43:41.073943 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0e74aa03-0dd7-4a4d-94fb-6534b2ad29b5)
2026-03-13 00:43:41.073951 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0e74aa03-0dd7-4a4d-94fb-6534b2ad29b5)
2026-03-13 00:43:41.073958 | orchestrator |
2026-03-13 00:43:41.073965 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:43:41.073972 | orchestrator | Friday 13 March 2026 00:43:37 +0000 (0:00:00.462) 0:00:03.214 **********
2026-03-13 00:43:41.073979 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e5b6d572-8591-43aa-97f9-3b718c2d248a)
2026-03-13 00:43:41.073986 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e5b6d572-8591-43aa-97f9-3b718c2d248a)
2026-03-13 00:43:41.073993 | orchestrator |
2026-03-13 00:43:41.074001 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:43:41.074008 | orchestrator | Friday 13 March 2026 00:43:38 +0000 (0:00:00.535) 0:00:03.749 **********
2026-03-13 00:43:41.074070 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b47ce045-806c-4f33-b887-31de2316680c)
2026-03-13 00:43:41.074081 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b47ce045-806c-4f33-b887-31de2316680c)
2026-03-13 00:43:41.074088 | orchestrator |
2026-03-13 00:43:41.074095 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:43:41.074128 | orchestrator | Friday 13 March 2026 00:43:39 +0000 (0:00:00.829) 0:00:04.579 **********
2026-03-13 00:43:41.074141 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-13 00:43:41.074149 | orchestrator |
2026-03-13 00:43:41.074156 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:43:41.074163 | orchestrator | Friday 13 March 2026 00:43:39 +0000 (0:00:00.328) 0:00:04.908 **********
2026-03-13 00:43:41.074170 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-13 00:43:41.074177 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-13 00:43:41.074185 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-13 00:43:41.074192 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-13 00:43:41.074199 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-13 00:43:41.074206 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-13 00:43:41.074213 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-13 00:43:41.074221 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-13 00:43:41.074228 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-13 00:43:41.074235 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-13 00:43:41.074242 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-13 00:43:41.074250 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-13 00:43:41.074257 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-13 00:43:41.074264 | orchestrator |
2026-03-13 00:43:41.074271 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:43:41.074278 | orchestrator | Friday 13 March 2026 00:43:39 +0000 (0:00:00.350) 0:00:05.259 **********
2026-03-13 00:43:41.074285 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:41.074292 | orchestrator |
2026-03-13 00:43:41.074299 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:43:41.074306 | orchestrator | Friday 13 March 2026 00:43:40 +0000 (0:00:00.176) 0:00:05.436 **********
2026-03-13 00:43:41.074320 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:41.074327 | orchestrator |
2026-03-13 00:43:41.074334 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:43:41.074341 | orchestrator | Friday 13 March 2026 00:43:40 +0000 (0:00:00.171) 0:00:05.608 **********
2026-03-13 00:43:41.074348 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:41.074355 | orchestrator |
2026-03-13 00:43:41.074362 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:43:41.074370 | orchestrator | Friday 13 March 2026 00:43:40 +0000 (0:00:00.169) 0:00:05.777 **********
2026-03-13 00:43:41.074377 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:41.074384 | orchestrator |
2026-03-13 00:43:41.074394 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:43:41.074406 | orchestrator | Friday 13 March 2026 00:43:40 +0000 (0:00:00.178) 0:00:05.955 **********
2026-03-13 00:43:41.074417 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:41.074428 | orchestrator |
2026-03-13 00:43:41.074440 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:43:41.074461 | orchestrator | Friday 13 March 2026 00:43:40 +0000 (0:00:00.180) 0:00:06.136 **********
2026-03-13 00:43:41.074473 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:41.074484 | orchestrator |
2026-03-13 00:43:41.074497 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:43:41.074510 | orchestrator | Friday 13 March 2026 00:43:40 +0000 (0:00:00.183) 0:00:06.319 **********
2026-03-13 00:43:41.074522 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:41.074534 | orchestrator |
2026-03-13 00:43:41.074549 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:43:48.754833 | orchestrator | Friday 13 March 2026 00:43:41 +0000 (0:00:00.178) 0:00:06.498 **********
2026-03-13 00:43:48.754922 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:48.754933 | orchestrator |
2026-03-13 00:43:48.754942 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:43:48.754951 | orchestrator | Friday 13 March 2026 00:43:41 +0000 (0:00:00.193) 0:00:06.692 **********
2026-03-13 00:43:48.754959 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-13 00:43:48.754968 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-13 00:43:48.754976 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-13 00:43:48.754984 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-13 00:43:48.754991 | orchestrator |
2026-03-13 00:43:48.754999 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:43:48.755007 | orchestrator | Friday 13 March 2026 00:43:42 +0000 (0:00:00.949) 0:00:07.642 **********
2026-03-13 00:43:48.755015 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:48.755022 | orchestrator |
2026-03-13 00:43:48.755031 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:43:48.755038 | orchestrator | Friday 13 March 2026 00:43:42 +0000 (0:00:00.187) 0:00:07.830 **********
2026-03-13 00:43:48.755046 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:48.755054 | orchestrator |
2026-03-13 00:43:48.755061 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:43:48.755069 | orchestrator | Friday 13 March 2026 00:43:42 +0000 (0:00:00.184) 0:00:08.014 **********
2026-03-13 00:43:48.755077 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:48.755085 | orchestrator |
2026-03-13 00:43:48.755093 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:43:48.755159 | orchestrator | Friday 13 March 2026 00:43:42 +0000 (0:00:00.205) 0:00:08.219 **********
2026-03-13 00:43:48.755175 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:48.755188 | orchestrator |
2026-03-13 00:43:48.755201 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-13 00:43:48.755213 | orchestrator | Friday 13 March 2026 00:43:43 +0000 (0:00:00.211) 0:00:08.430 **********
2026-03-13 00:43:48.755226 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:48.755264 | orchestrator |
2026-03-13 00:43:48.755280 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-13 00:43:48.755294 | orchestrator | Friday 13 March 2026 00:43:43 +0000 (0:00:00.167) 0:00:08.597 **********
2026-03-13 00:43:48.755308 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b5494c86-4b11-53e5-88ab-5da9d8a68a1e'}})
2026-03-13 00:43:48.755321 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b7299377-1bbd-5436-9d58-2dd820a08a2f'}})
2026-03-13 00:43:48.755335 | orchestrator |
2026-03-13 00:43:48.755360 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-13 00:43:48.755368 | orchestrator | Friday 13 March 2026 00:43:43 +0000 (0:00:00.161) 0:00:08.759 **********
2026-03-13 00:43:48.755377 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b5494c86-4b11-53e5-88ab-5da9d8a68a1e', 'data_vg': 'ceph-b5494c86-4b11-53e5-88ab-5da9d8a68a1e'})
2026-03-13 00:43:48.755387 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b7299377-1bbd-5436-9d58-2dd820a08a2f', 'data_vg': 'ceph-b7299377-1bbd-5436-9d58-2dd820a08a2f'})
2026-03-13 00:43:48.755396 | orchestrator |
2026-03-13 00:43:48.755405 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-13 00:43:48.755414 | orchestrator | Friday 13 March 2026 00:43:45 +0000 (0:00:01.893) 0:00:10.652 **********
2026-03-13 00:43:48.755424 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b5494c86-4b11-53e5-88ab-5da9d8a68a1e', 'data_vg': 'ceph-b5494c86-4b11-53e5-88ab-5da9d8a68a1e'})
2026-03-13 00:43:48.755434 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b7299377-1bbd-5436-9d58-2dd820a08a2f', 'data_vg': 'ceph-b7299377-1bbd-5436-9d58-2dd820a08a2f'})
2026-03-13 00:43:48.755443 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:48.755452 | orchestrator |
2026-03-13 00:43:48.755461 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-13 00:43:48.755471 | orchestrator | Friday 13 March 2026 00:43:45 +0000 (0:00:00.160) 0:00:10.813 **********
2026-03-13 00:43:48.755479 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b5494c86-4b11-53e5-88ab-5da9d8a68a1e', 'data_vg': 'ceph-b5494c86-4b11-53e5-88ab-5da9d8a68a1e'})
2026-03-13 00:43:48.755489 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b7299377-1bbd-5436-9d58-2dd820a08a2f', 'data_vg': 'ceph-b7299377-1bbd-5436-9d58-2dd820a08a2f'})
2026-03-13 00:43:48.755498 | orchestrator |
2026-03-13 00:43:48.755506 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-13 00:43:48.755515 | orchestrator | Friday 13 March 2026 00:43:46 +0000 (0:00:01.464) 0:00:12.277 **********
2026-03-13 00:43:48.755524 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b5494c86-4b11-53e5-88ab-5da9d8a68a1e', 'data_vg': 'ceph-b5494c86-4b11-53e5-88ab-5da9d8a68a1e'})
2026-03-13 00:43:48.755534 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b7299377-1bbd-5436-9d58-2dd820a08a2f', 'data_vg': 'ceph-b7299377-1bbd-5436-9d58-2dd820a08a2f'})
2026-03-13 00:43:48.755543 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:48.755552 | orchestrator |
2026-03-13 00:43:48.755561 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-13 00:43:48.755570 | orchestrator | Friday 13 March 2026 00:43:46 +0000 (0:00:00.131) 0:00:12.408 **********
2026-03-13 00:43:48.755594 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:48.755604 | orchestrator |
2026-03-13 00:43:48.755613 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-13 00:43:48.755620 | orchestrator | Friday 13 March 2026 00:43:47 +0000 (0:00:00.130) 0:00:12.539 **********
2026-03-13 00:43:48.755628 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b5494c86-4b11-53e5-88ab-5da9d8a68a1e', 'data_vg': 'ceph-b5494c86-4b11-53e5-88ab-5da9d8a68a1e'})
2026-03-13 00:43:48.755636 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b7299377-1bbd-5436-9d58-2dd820a08a2f', 'data_vg': 'ceph-b7299377-1bbd-5436-9d58-2dd820a08a2f'})
2026-03-13 00:43:48.755651 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:48.755658 | orchestrator |
2026-03-13 00:43:48.755666 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-13 00:43:48.755674 | orchestrator | Friday 13 March 2026 00:43:47 +0000 (0:00:00.259) 0:00:12.799 **********
2026-03-13 00:43:48.755681 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:48.755689 | orchestrator |
2026-03-13 00:43:48.755697 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-13 00:43:48.755704 | orchestrator | Friday 13 March 2026 00:43:47 +0000 (0:00:00.170) 0:00:12.969 **********
2026-03-13 00:43:48.755712 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b5494c86-4b11-53e5-88ab-5da9d8a68a1e', 'data_vg': 'ceph-b5494c86-4b11-53e5-88ab-5da9d8a68a1e'})
2026-03-13 00:43:48.755720 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b7299377-1bbd-5436-9d58-2dd820a08a2f', 'data_vg': 'ceph-b7299377-1bbd-5436-9d58-2dd820a08a2f'})
2026-03-13 00:43:48.755728 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:48.755736 | orchestrator |
2026-03-13 00:43:48.755743 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-13 00:43:48.755751 | orchestrator | Friday 13 March 2026 00:43:47 +0000 (0:00:00.155) 0:00:13.125 **********
2026-03-13 00:43:48.755758 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:48.755766 | orchestrator |
2026-03-13 00:43:48.755777 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-13 00:43:48.755791 | orchestrator | Friday 13 March 2026 00:43:47 +0000 (0:00:00.133) 0:00:13.258 **********
2026-03-13 00:43:48.755804 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b5494c86-4b11-53e5-88ab-5da9d8a68a1e', 'data_vg': 'ceph-b5494c86-4b11-53e5-88ab-5da9d8a68a1e'})
2026-03-13 00:43:48.755817 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b7299377-1bbd-5436-9d58-2dd820a08a2f', 'data_vg': 'ceph-b7299377-1bbd-5436-9d58-2dd820a08a2f'})
2026-03-13 00:43:48.755828 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:48.755840 | orchestrator |
2026-03-13 00:43:48.755852 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-13 00:43:48.755866 | orchestrator | Friday 13 March 2026 00:43:47 +0000 (0:00:00.162) 0:00:13.421 **********
2026-03-13 00:43:48.755879 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:43:48.755893 | orchestrator |
2026-03-13 00:43:48.755907 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-13 00:43:48.755920 | orchestrator | Friday 13 March 2026 00:43:48 +0000 (0:00:00.138) 0:00:13.559 **********
2026-03-13 00:43:48.755933 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b5494c86-4b11-53e5-88ab-5da9d8a68a1e', 'data_vg': 'ceph-b5494c86-4b11-53e5-88ab-5da9d8a68a1e'})
2026-03-13 00:43:48.755946 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b7299377-1bbd-5436-9d58-2dd820a08a2f', 'data_vg': 'ceph-b7299377-1bbd-5436-9d58-2dd820a08a2f'})
2026-03-13 00:43:48.755954 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:48.755962 | orchestrator |
2026-03-13 00:43:48.755984 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-13 00:43:48.755992 | orchestrator | Friday 13 March 2026 00:43:48 +0000 (0:00:00.182) 0:00:13.742 **********
2026-03-13 00:43:48.756000 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b5494c86-4b11-53e5-88ab-5da9d8a68a1e', 'data_vg': 'ceph-b5494c86-4b11-53e5-88ab-5da9d8a68a1e'})
2026-03-13 00:43:48.756017 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b7299377-1bbd-5436-9d58-2dd820a08a2f', 'data_vg': 'ceph-b7299377-1bbd-5436-9d58-2dd820a08a2f'})
2026-03-13 00:43:48.756026 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:48.756033 | orchestrator |
2026-03-13 00:43:48.756041 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-13 00:43:48.756056 | orchestrator | Friday 13 March 2026 00:43:48 +0000 (0:00:00.163) 0:00:13.905 **********
2026-03-13 00:43:48.756064 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b5494c86-4b11-53e5-88ab-5da9d8a68a1e', 'data_vg': 'ceph-b5494c86-4b11-53e5-88ab-5da9d8a68a1e'})
2026-03-13 00:43:48.756072 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b7299377-1bbd-5436-9d58-2dd820a08a2f', 'data_vg': 'ceph-b7299377-1bbd-5436-9d58-2dd820a08a2f'})
2026-03-13 00:43:48.756080 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:48.756088 | orchestrator |
2026-03-13 00:43:48.756095 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-13 00:43:48.756103 | orchestrator | Friday 13 March 2026 00:43:48 +0000 (0:00:00.148) 0:00:14.054 **********
2026-03-13 00:43:48.756139 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:48.756147 | orchestrator |
2026-03-13 00:43:48.756155 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-13 00:43:48.756169 | orchestrator | Friday 13 March 2026 00:43:48 +0000 (0:00:00.124) 0:00:14.178 **********
2026-03-13 00:43:54.722979 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:54.723268 | orchestrator |
2026-03-13 00:43:54.723311 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-13 00:43:54.723331 | orchestrator | Friday 13 March 2026 00:43:48 +0000 (0:00:00.116) 0:00:14.294 **********
2026-03-13 00:43:54.723349 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:54.723368 | orchestrator |
2026-03-13 00:43:54.723386 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-13 00:43:54.723404 | orchestrator | Friday 13 March 2026 00:43:48 +0000 (0:00:00.121) 0:00:14.416 **********
2026-03-13 00:43:54.723423 | orchestrator | ok: [testbed-node-3] => {
2026-03-13 00:43:54.723443 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-13 00:43:54.723462 | orchestrator | }
2026-03-13 00:43:54.723482 | orchestrator |
2026-03-13 00:43:54.723526 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-13 00:43:54.723545 | orchestrator | Friday 13 March 2026 00:43:49 +0000 (0:00:00.282) 0:00:14.698 **********
2026-03-13 00:43:54.723557 | orchestrator | ok: [testbed-node-3] => {
2026-03-13 00:43:54.723568 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-13 00:43:54.723580 | orchestrator | }
2026-03-13 00:43:54.723590 | orchestrator |
2026-03-13 00:43:54.723601 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-13 00:43:54.723612 | orchestrator | Friday 13 March 2026 00:43:49 +0000 (0:00:00.123) 0:00:14.822 **********
2026-03-13 00:43:54.723623 | orchestrator | ok: [testbed-node-3] => {
2026-03-13 00:43:54.723633 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-13 00:43:54.723646 | orchestrator | }
2026-03-13 00:43:54.723658 | orchestrator |
2026-03-13 00:43:54.723670 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-13 00:43:54.723683 | orchestrator | Friday 13 March 2026 00:43:49 +0000 (0:00:00.126) 0:00:14.949 **********
2026-03-13 00:43:54.723695 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:43:54.723708 | orchestrator |
2026-03-13 00:43:54.723718 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-13 00:43:54.723729 | orchestrator | Friday 13 March 2026 00:43:50 +0000 (0:00:00.664) 0:00:15.613 **********
2026-03-13 00:43:54.723740 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:43:54.723750 | orchestrator |
2026-03-13 00:43:54.723761 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-13 00:43:54.723772 | orchestrator | Friday 13 March 2026 00:43:50 +0000 (0:00:00.523) 0:00:16.137 **********
2026-03-13 00:43:54.723782 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:43:54.723793 | orchestrator |
2026-03-13 00:43:54.723803 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-13 00:43:54.723814 | orchestrator | Friday 13 March 2026 00:43:51 +0000 (0:00:00.500) 0:00:16.638 **********
2026-03-13 00:43:54.723825 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:43:54.723835 | orchestrator |
2026-03-13 00:43:54.723874 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-13 00:43:54.723886 | orchestrator | Friday 13 March 2026 00:43:51 +0000 (0:00:00.141) 0:00:16.780 **********
2026-03-13 00:43:54.723897 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:54.723908 | orchestrator |
2026-03-13 00:43:54.723919 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-13 00:43:54.723930 | orchestrator | Friday 13 March 2026 00:43:51 +0000 (0:00:00.123) 0:00:16.903 **********
2026-03-13 00:43:54.723940 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:54.723951 | orchestrator |
2026-03-13 00:43:54.723961 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-13 00:43:54.723972 | orchestrator | Friday 13 March 2026 00:43:51 +0000 (0:00:00.100) 0:00:17.003 **********
2026-03-13 00:43:54.723983 | orchestrator | ok: [testbed-node-3] => {
2026-03-13 00:43:54.723994 | orchestrator |     "vgs_report": {
2026-03-13 00:43:54.724004 | orchestrator |         "vg": []
2026-03-13 00:43:54.724015 | orchestrator |     }
2026-03-13 00:43:54.724026 | orchestrator | }
2026-03-13 00:43:54.724036 | orchestrator |
2026-03-13 00:43:54.724047 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-13 00:43:54.724058 | orchestrator | Friday 13 March 2026 00:43:51 +0000 (0:00:00.124) 0:00:17.127 **********
2026-03-13 00:43:54.724068 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:54.724079 | orchestrator |
2026-03-13 00:43:54.724089 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-13 00:43:54.724100 | orchestrator | Friday 13 March 2026 00:43:51 +0000 (0:00:00.127) 0:00:17.255 **********
2026-03-13 00:43:54.724138 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:54.724160 | orchestrator |
2026-03-13 00:43:54.724176 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-13 00:43:54.724188 | orchestrator | Friday 13 March 2026 00:43:51 +0000 (0:00:00.137) 0:00:17.393 **********
2026-03-13 00:43:54.724198 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:54.724209 | orchestrator |
2026-03-13 00:43:54.724220 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-13 00:43:54.724230 | orchestrator | Friday 13 March 2026 00:43:52 +0000 (0:00:00.284) 0:00:17.677 **********
2026-03-13 00:43:54.724241 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:54.724251 | orchestrator |
2026-03-13 00:43:54.724262 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-13 00:43:54.724273 | orchestrator | Friday 13 March 2026 00:43:52 +0000 (0:00:00.124) 0:00:17.802 **********
2026-03-13 00:43:54.724283 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:54.724294 | orchestrator |
2026-03-13 00:43:54.724304 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-13 00:43:54.724315 | orchestrator | Friday 13 March 2026 00:43:52 +0000 (0:00:00.115) 0:00:17.917 **********
2026-03-13 00:43:54.724325 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:54.724336 | orchestrator |
2026-03-13 00:43:54.724346 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-13 00:43:54.724357 | orchestrator | Friday 13 March 2026 00:43:52 +0000 (0:00:00.126) 0:00:18.044 **********
2026-03-13 00:43:54.724367 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:54.724378 | orchestrator |
2026-03-13 00:43:54.724388 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-13 00:43:54.724399 | orchestrator | Friday 13 March 2026 00:43:52 +0000 (0:00:00.150) 0:00:18.194 **********
2026-03-13 00:43:54.724431 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:54.724443 | orchestrator |
2026-03-13 00:43:54.724453 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-13 00:43:54.724464 | orchestrator | Friday 13 March 2026 00:43:52 +0000 (0:00:00.118) 0:00:18.313 **********
2026-03-13 00:43:54.724475 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:54.724485 | orchestrator |
2026-03-13 00:43:54.724502 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-13 00:43:54.724526 | orchestrator | Friday 13 March 2026 00:43:53 +0000 (0:00:00.115) 0:00:18.429 **********
2026-03-13 00:43:54.724537 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:54.724548 | orchestrator |
2026-03-13 00:43:54.724558 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-13 00:43:54.724569 | orchestrator | Friday 13 March 2026 00:43:53 +0000 (0:00:00.115) 0:00:18.544 **********
2026-03-13 00:43:54.724579 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:54.724590 | orchestrator |
2026-03-13 00:43:54.724600 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-13 00:43:54.724611 | orchestrator | Friday 13 March 2026 00:43:53 +0000 (0:00:00.115) 0:00:18.659 **********
2026-03-13 00:43:54.724622 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:54.724632 | orchestrator |
2026-03-13 00:43:54.724643 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-13 00:43:54.724653 | orchestrator | Friday 13 March 2026 00:43:53 +0000 (0:00:00.176) 0:00:18.836 **********
2026-03-13 00:43:54.724664 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:54.724674 | orchestrator |
2026-03-13 00:43:54.724685 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-13 00:43:54.724696 | orchestrator | Friday 13 March 2026 00:43:53 +0000 (0:00:00.148) 0:00:18.985 **********
2026-03-13 00:43:54.724706 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:43:54.724717 | orchestrator |
2026-03-13 00:43:54.724727 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-13 00:43:54.724738 | orchestrator | Friday 13 March 2026 00:43:53 +0000 (0:00:00.124) 0:00:19.110 **********
2026-03-13 00:43:54.724751 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b5494c86-4b11-53e5-88ab-5da9d8a68a1e', 'data_vg': 'ceph-b5494c86-4b11-53e5-88ab-5da9d8a68a1e'})
2026-03-13 00:43:54.724771 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b7299377-1bbd-5436-9d58-2dd820a08a2f', 'data_vg':
'ceph-b7299377-1bbd-5436-9d58-2dd820a08a2f'})  2026-03-13 00:43:54.724782 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:43:54.724793 | orchestrator | 2026-03-13 00:43:54.724803 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-13 00:43:54.724819 | orchestrator | Friday 13 March 2026 00:43:53 +0000 (0:00:00.277) 0:00:19.387 ********** 2026-03-13 00:43:54.724830 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b5494c86-4b11-53e5-88ab-5da9d8a68a1e', 'data_vg': 'ceph-b5494c86-4b11-53e5-88ab-5da9d8a68a1e'})  2026-03-13 00:43:54.724841 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b7299377-1bbd-5436-9d58-2dd820a08a2f', 'data_vg': 'ceph-b7299377-1bbd-5436-9d58-2dd820a08a2f'})  2026-03-13 00:43:54.724852 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:43:54.724862 | orchestrator | 2026-03-13 00:43:54.724873 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-13 00:43:54.724883 | orchestrator | Friday 13 March 2026 00:43:54 +0000 (0:00:00.192) 0:00:19.580 ********** 2026-03-13 00:43:54.724894 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b5494c86-4b11-53e5-88ab-5da9d8a68a1e', 'data_vg': 'ceph-b5494c86-4b11-53e5-88ab-5da9d8a68a1e'})  2026-03-13 00:43:54.724905 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b7299377-1bbd-5436-9d58-2dd820a08a2f', 'data_vg': 'ceph-b7299377-1bbd-5436-9d58-2dd820a08a2f'})  2026-03-13 00:43:54.724916 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:43:54.724926 | orchestrator | 2026-03-13 00:43:54.724937 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-13 00:43:54.724947 | orchestrator | Friday 13 March 2026 00:43:54 +0000 (0:00:00.182) 0:00:19.763 ********** 2026-03-13 00:43:54.724958 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-b5494c86-4b11-53e5-88ab-5da9d8a68a1e', 'data_vg': 'ceph-b5494c86-4b11-53e5-88ab-5da9d8a68a1e'})  2026-03-13 00:43:54.724969 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b7299377-1bbd-5436-9d58-2dd820a08a2f', 'data_vg': 'ceph-b7299377-1bbd-5436-9d58-2dd820a08a2f'})  2026-03-13 00:43:54.724986 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:43:54.724996 | orchestrator | 2026-03-13 00:43:54.725007 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-13 00:43:54.725017 | orchestrator | Friday 13 March 2026 00:43:54 +0000 (0:00:00.170) 0:00:19.933 ********** 2026-03-13 00:43:54.725028 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b5494c86-4b11-53e5-88ab-5da9d8a68a1e', 'data_vg': 'ceph-b5494c86-4b11-53e5-88ab-5da9d8a68a1e'})  2026-03-13 00:43:54.725039 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b7299377-1bbd-5436-9d58-2dd820a08a2f', 'data_vg': 'ceph-b7299377-1bbd-5436-9d58-2dd820a08a2f'})  2026-03-13 00:43:54.725050 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:43:54.725060 | orchestrator | 2026-03-13 00:43:54.725071 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-13 00:43:54.725082 | orchestrator | Friday 13 March 2026 00:43:54 +0000 (0:00:00.159) 0:00:20.093 ********** 2026-03-13 00:43:54.725100 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b5494c86-4b11-53e5-88ab-5da9d8a68a1e', 'data_vg': 'ceph-b5494c86-4b11-53e5-88ab-5da9d8a68a1e'})  2026-03-13 00:43:59.768336 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b7299377-1bbd-5436-9d58-2dd820a08a2f', 'data_vg': 'ceph-b7299377-1bbd-5436-9d58-2dd820a08a2f'})  2026-03-13 00:43:59.768455 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:43:59.768472 | orchestrator | 2026-03-13 00:43:59.768480 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-03-13 00:43:59.768490 | orchestrator | Friday 13 March 2026 00:43:54 +0000 (0:00:00.145) 0:00:20.238 ********** 2026-03-13 00:43:59.768497 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b5494c86-4b11-53e5-88ab-5da9d8a68a1e', 'data_vg': 'ceph-b5494c86-4b11-53e5-88ab-5da9d8a68a1e'})  2026-03-13 00:43:59.768504 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b7299377-1bbd-5436-9d58-2dd820a08a2f', 'data_vg': 'ceph-b7299377-1bbd-5436-9d58-2dd820a08a2f'})  2026-03-13 00:43:59.768511 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:43:59.768517 | orchestrator | 2026-03-13 00:43:59.768524 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-13 00:43:59.768530 | orchestrator | Friday 13 March 2026 00:43:55 +0000 (0:00:00.233) 0:00:20.472 ********** 2026-03-13 00:43:59.768536 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b5494c86-4b11-53e5-88ab-5da9d8a68a1e', 'data_vg': 'ceph-b5494c86-4b11-53e5-88ab-5da9d8a68a1e'})  2026-03-13 00:43:59.768542 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b7299377-1bbd-5436-9d58-2dd820a08a2f', 'data_vg': 'ceph-b7299377-1bbd-5436-9d58-2dd820a08a2f'})  2026-03-13 00:43:59.768548 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:43:59.768554 | orchestrator | 2026-03-13 00:43:59.768561 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-13 00:43:59.768567 | orchestrator | Friday 13 March 2026 00:43:55 +0000 (0:00:00.162) 0:00:20.635 ********** 2026-03-13 00:43:59.768573 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:43:59.768580 | orchestrator | 2026-03-13 00:43:59.768586 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-13 00:43:59.768593 | orchestrator | Friday 13 March 2026 00:43:55 +0000 
(0:00:00.567) 0:00:21.203 ********** 2026-03-13 00:43:59.768599 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:43:59.768605 | orchestrator | 2026-03-13 00:43:59.768611 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-13 00:43:59.768618 | orchestrator | Friday 13 March 2026 00:43:56 +0000 (0:00:00.504) 0:00:21.708 ********** 2026-03-13 00:43:59.768623 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:43:59.768630 | orchestrator | 2026-03-13 00:43:59.768636 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-13 00:43:59.768642 | orchestrator | Friday 13 March 2026 00:43:56 +0000 (0:00:00.140) 0:00:21.848 ********** 2026-03-13 00:43:59.768672 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-b5494c86-4b11-53e5-88ab-5da9d8a68a1e', 'vg_name': 'ceph-b5494c86-4b11-53e5-88ab-5da9d8a68a1e'}) 2026-03-13 00:43:59.768681 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-b7299377-1bbd-5436-9d58-2dd820a08a2f', 'vg_name': 'ceph-b7299377-1bbd-5436-9d58-2dd820a08a2f'}) 2026-03-13 00:43:59.768687 | orchestrator | 2026-03-13 00:43:59.768693 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-13 00:43:59.768699 | orchestrator | Friday 13 March 2026 00:43:56 +0000 (0:00:00.141) 0:00:21.989 ********** 2026-03-13 00:43:59.768721 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b5494c86-4b11-53e5-88ab-5da9d8a68a1e', 'data_vg': 'ceph-b5494c86-4b11-53e5-88ab-5da9d8a68a1e'})  2026-03-13 00:43:59.768728 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b7299377-1bbd-5436-9d58-2dd820a08a2f', 'data_vg': 'ceph-b7299377-1bbd-5436-9d58-2dd820a08a2f'})  2026-03-13 00:43:59.768734 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:43:59.768741 | orchestrator | 2026-03-13 00:43:59.768748 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-03-13 00:43:59.768754 | orchestrator | Friday 13 March 2026 00:43:56 +0000 (0:00:00.277) 0:00:22.267 ********** 2026-03-13 00:43:59.768761 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b5494c86-4b11-53e5-88ab-5da9d8a68a1e', 'data_vg': 'ceph-b5494c86-4b11-53e5-88ab-5da9d8a68a1e'})  2026-03-13 00:43:59.768767 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b7299377-1bbd-5436-9d58-2dd820a08a2f', 'data_vg': 'ceph-b7299377-1bbd-5436-9d58-2dd820a08a2f'})  2026-03-13 00:43:59.768773 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:43:59.768784 | orchestrator | 2026-03-13 00:43:59.768790 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-13 00:43:59.768796 | orchestrator | Friday 13 March 2026 00:43:56 +0000 (0:00:00.138) 0:00:22.406 ********** 2026-03-13 00:43:59.768803 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b5494c86-4b11-53e5-88ab-5da9d8a68a1e', 'data_vg': 'ceph-b5494c86-4b11-53e5-88ab-5da9d8a68a1e'})  2026-03-13 00:43:59.768809 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b7299377-1bbd-5436-9d58-2dd820a08a2f', 'data_vg': 'ceph-b7299377-1bbd-5436-9d58-2dd820a08a2f'})  2026-03-13 00:43:59.768815 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:43:59.768821 | orchestrator | 2026-03-13 00:43:59.768827 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-13 00:43:59.768833 | orchestrator | Friday 13 March 2026 00:43:57 +0000 (0:00:00.134) 0:00:22.540 ********** 2026-03-13 00:43:59.768855 | orchestrator | ok: [testbed-node-3] => { 2026-03-13 00:43:59.768861 | orchestrator |  "lvm_report": { 2026-03-13 00:43:59.768867 | orchestrator |  "lv": [ 2026-03-13 00:43:59.768873 | orchestrator |  { 2026-03-13 00:43:59.768880 | orchestrator |  "lv_name": 
"osd-block-b5494c86-4b11-53e5-88ab-5da9d8a68a1e", 2026-03-13 00:43:59.768889 | orchestrator |  "vg_name": "ceph-b5494c86-4b11-53e5-88ab-5da9d8a68a1e" 2026-03-13 00:43:59.768895 | orchestrator |  }, 2026-03-13 00:43:59.768901 | orchestrator |  { 2026-03-13 00:43:59.768908 | orchestrator |  "lv_name": "osd-block-b7299377-1bbd-5436-9d58-2dd820a08a2f", 2026-03-13 00:43:59.768915 | orchestrator |  "vg_name": "ceph-b7299377-1bbd-5436-9d58-2dd820a08a2f" 2026-03-13 00:43:59.768921 | orchestrator |  } 2026-03-13 00:43:59.768927 | orchestrator |  ], 2026-03-13 00:43:59.768933 | orchestrator |  "pv": [ 2026-03-13 00:43:59.768939 | orchestrator |  { 2026-03-13 00:43:59.768945 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-13 00:43:59.768951 | orchestrator |  "vg_name": "ceph-b5494c86-4b11-53e5-88ab-5da9d8a68a1e" 2026-03-13 00:43:59.768957 | orchestrator |  }, 2026-03-13 00:43:59.768962 | orchestrator |  { 2026-03-13 00:43:59.768975 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-13 00:43:59.768981 | orchestrator |  "vg_name": "ceph-b7299377-1bbd-5436-9d58-2dd820a08a2f" 2026-03-13 00:43:59.768987 | orchestrator |  } 2026-03-13 00:43:59.768994 | orchestrator |  ] 2026-03-13 00:43:59.769000 | orchestrator |  } 2026-03-13 00:43:59.769007 | orchestrator | } 2026-03-13 00:43:59.769013 | orchestrator | 2026-03-13 00:43:59.769019 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-13 00:43:59.769026 | orchestrator | 2026-03-13 00:43:59.769032 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-13 00:43:59.769038 | orchestrator | Friday 13 March 2026 00:43:57 +0000 (0:00:00.233) 0:00:22.773 ********** 2026-03-13 00:43:59.769044 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-13 00:43:59.769051 | orchestrator | 2026-03-13 00:43:59.769057 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-13 
00:43:59.769063 | orchestrator | Friday 13 March 2026 00:43:57 +0000 (0:00:00.218) 0:00:22.992 ********** 2026-03-13 00:43:59.769069 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:43:59.769076 | orchestrator | 2026-03-13 00:43:59.769083 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:43:59.769089 | orchestrator | Friday 13 March 2026 00:43:57 +0000 (0:00:00.268) 0:00:23.261 ********** 2026-03-13 00:43:59.769108 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-13 00:43:59.769162 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-13 00:43:59.769170 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-13 00:43:59.769176 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-13 00:43:59.769182 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-13 00:43:59.769189 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-13 00:43:59.769195 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-13 00:43:59.769201 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-13 00:43:59.769207 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-13 00:43:59.769213 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-13 00:43:59.769219 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-13 00:43:59.769225 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-13 00:43:59.769232 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-13 00:43:59.769238 | orchestrator | 2026-03-13 00:43:59.769245 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:43:59.769251 | orchestrator | Friday 13 March 2026 00:43:58 +0000 (0:00:00.469) 0:00:23.731 ********** 2026-03-13 00:43:59.769258 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:43:59.769265 | orchestrator | 2026-03-13 00:43:59.769271 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:43:59.769277 | orchestrator | Friday 13 March 2026 00:43:58 +0000 (0:00:00.195) 0:00:23.926 ********** 2026-03-13 00:43:59.769283 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:43:59.769289 | orchestrator | 2026-03-13 00:43:59.769295 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:43:59.769301 | orchestrator | Friday 13 March 2026 00:43:58 +0000 (0:00:00.189) 0:00:24.116 ********** 2026-03-13 00:43:59.769307 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:43:59.769313 | orchestrator | 2026-03-13 00:43:59.769319 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:43:59.769333 | orchestrator | Friday 13 March 2026 00:43:59 +0000 (0:00:00.514) 0:00:24.631 ********** 2026-03-13 00:43:59.769339 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:43:59.769345 | orchestrator | 2026-03-13 00:43:59.769351 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:43:59.769357 | orchestrator | Friday 13 March 2026 00:43:59 +0000 (0:00:00.193) 0:00:24.824 ********** 2026-03-13 00:43:59.769364 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:43:59.769370 | orchestrator | 2026-03-13 00:43:59.769376 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-03-13 00:43:59.769383 | orchestrator | Friday 13 March 2026 00:43:59 +0000 (0:00:00.183) 0:00:25.007 ********** 2026-03-13 00:43:59.769390 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:43:59.769396 | orchestrator | 2026-03-13 00:43:59.769410 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:44:10.408515 | orchestrator | Friday 13 March 2026 00:43:59 +0000 (0:00:00.184) 0:00:25.192 ********** 2026-03-13 00:44:10.408610 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:10.408623 | orchestrator | 2026-03-13 00:44:10.408631 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:44:10.408637 | orchestrator | Friday 13 March 2026 00:43:59 +0000 (0:00:00.184) 0:00:25.377 ********** 2026-03-13 00:44:10.408644 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:10.408651 | orchestrator | 2026-03-13 00:44:10.408657 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:44:10.408664 | orchestrator | Friday 13 March 2026 00:44:00 +0000 (0:00:00.169) 0:00:25.546 ********** 2026-03-13 00:44:10.408671 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233) 2026-03-13 00:44:10.408678 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233) 2026-03-13 00:44:10.408685 | orchestrator | 2026-03-13 00:44:10.408691 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:44:10.408697 | orchestrator | Friday 13 March 2026 00:44:00 +0000 (0:00:00.400) 0:00:25.946 ********** 2026-03-13 00:44:10.408703 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9a254b57-f2ae-4287-95a0-937fffba734e) 2026-03-13 00:44:10.408709 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9a254b57-f2ae-4287-95a0-937fffba734e) 2026-03-13 00:44:10.408716 | orchestrator | 2026-03-13 00:44:10.408722 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:44:10.408728 | orchestrator | Friday 13 March 2026 00:44:00 +0000 (0:00:00.388) 0:00:26.335 ********** 2026-03-13 00:44:10.408734 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5123065a-17ef-4227-8b29-db8d7701c704) 2026-03-13 00:44:10.408741 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5123065a-17ef-4227-8b29-db8d7701c704) 2026-03-13 00:44:10.408747 | orchestrator | 2026-03-13 00:44:10.408754 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:44:10.408760 | orchestrator | Friday 13 March 2026 00:44:01 +0000 (0:00:00.378) 0:00:26.713 ********** 2026-03-13 00:44:10.408784 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e49f76b8-3d49-472e-b9d5-6b475ff66b1a) 2026-03-13 00:44:10.408792 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e49f76b8-3d49-472e-b9d5-6b475ff66b1a) 2026-03-13 00:44:10.408798 | orchestrator | 2026-03-13 00:44:10.408804 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:44:10.408811 | orchestrator | Friday 13 March 2026 00:44:01 +0000 (0:00:00.557) 0:00:27.271 ********** 2026-03-13 00:44:10.408817 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-13 00:44:10.408823 | orchestrator | 2026-03-13 00:44:10.408830 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:44:10.408836 | orchestrator | Friday 13 March 2026 00:44:02 +0000 (0:00:00.476) 0:00:27.748 ********** 2026-03-13 00:44:10.408864 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-03-13 00:44:10.408871 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-13 00:44:10.408877 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-13 00:44:10.408883 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-13 00:44:10.408889 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-13 00:44:10.408895 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-13 00:44:10.408901 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-13 00:44:10.408907 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-13 00:44:10.408914 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-13 00:44:10.408920 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-13 00:44:10.408927 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-13 00:44:10.408933 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-13 00:44:10.408939 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-13 00:44:10.408947 | orchestrator | 2026-03-13 00:44:10.408952 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:44:10.408958 | orchestrator | Friday 13 March 2026 00:44:03 +0000 (0:00:00.707) 0:00:28.455 ********** 2026-03-13 00:44:10.408964 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:10.408970 | orchestrator | 2026-03-13 
00:44:10.408976 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:44:10.408982 | orchestrator | Friday 13 March 2026 00:44:03 +0000 (0:00:00.183) 0:00:28.638 ********** 2026-03-13 00:44:10.408988 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:10.408994 | orchestrator | 2026-03-13 00:44:10.409001 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:44:10.409007 | orchestrator | Friday 13 March 2026 00:44:03 +0000 (0:00:00.198) 0:00:28.837 ********** 2026-03-13 00:44:10.409013 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:10.409019 | orchestrator | 2026-03-13 00:44:10.409042 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:44:10.409048 | orchestrator | Friday 13 March 2026 00:44:03 +0000 (0:00:00.193) 0:00:29.030 ********** 2026-03-13 00:44:10.409054 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:10.409060 | orchestrator | 2026-03-13 00:44:10.409067 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:44:10.409073 | orchestrator | Friday 13 March 2026 00:44:03 +0000 (0:00:00.200) 0:00:29.231 ********** 2026-03-13 00:44:10.409079 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:10.409085 | orchestrator | 2026-03-13 00:44:10.409091 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:44:10.409097 | orchestrator | Friday 13 March 2026 00:44:03 +0000 (0:00:00.179) 0:00:29.410 ********** 2026-03-13 00:44:10.409103 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:10.409109 | orchestrator | 2026-03-13 00:44:10.409115 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:44:10.409121 | orchestrator | Friday 13 March 2026 00:44:04 +0000 (0:00:00.186) 
0:00:29.597 ********** 2026-03-13 00:44:10.409176 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:10.409183 | orchestrator | 2026-03-13 00:44:10.409190 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:44:10.409196 | orchestrator | Friday 13 March 2026 00:44:04 +0000 (0:00:00.185) 0:00:29.783 ********** 2026-03-13 00:44:10.409211 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:10.409219 | orchestrator | 2026-03-13 00:44:10.409225 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:44:10.409232 | orchestrator | Friday 13 March 2026 00:44:04 +0000 (0:00:00.175) 0:00:29.958 ********** 2026-03-13 00:44:10.409238 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-13 00:44:10.409244 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-13 00:44:10.409251 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-13 00:44:10.409257 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-13 00:44:10.409263 | orchestrator | 2026-03-13 00:44:10.409269 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:44:10.409275 | orchestrator | Friday 13 March 2026 00:44:05 +0000 (0:00:00.780) 0:00:30.739 ********** 2026-03-13 00:44:10.409281 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:10.409286 | orchestrator | 2026-03-13 00:44:10.409290 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:44:10.409294 | orchestrator | Friday 13 March 2026 00:44:05 +0000 (0:00:00.208) 0:00:30.948 ********** 2026-03-13 00:44:10.409298 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:10.409301 | orchestrator | 2026-03-13 00:44:10.409305 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:44:10.409318 | orchestrator | Friday 13 
March 2026 00:44:06 +0000 (0:00:00.521) 0:00:31.469 ********** 2026-03-13 00:44:10.409322 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:10.409326 | orchestrator | 2026-03-13 00:44:10.409332 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-13 00:44:10.409338 | orchestrator | Friday 13 March 2026 00:44:06 +0000 (0:00:00.182) 0:00:31.652 ********** 2026-03-13 00:44:10.409344 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:10.409350 | orchestrator | 2026-03-13 00:44:10.409356 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-13 00:44:10.409362 | orchestrator | Friday 13 March 2026 00:44:06 +0000 (0:00:00.238) 0:00:31.891 ********** 2026-03-13 00:44:10.409368 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:10.409374 | orchestrator | 2026-03-13 00:44:10.409380 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-13 00:44:10.409387 | orchestrator | Friday 13 March 2026 00:44:06 +0000 (0:00:00.149) 0:00:32.040 ********** 2026-03-13 00:44:10.409394 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '49707cb0-36ac-571b-bf56-7288c46886ca'}}) 2026-03-13 00:44:10.409401 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '798cee0b-732e-51b2-a8a3-29d8c2932297'}}) 2026-03-13 00:44:10.409408 | orchestrator | 2026-03-13 00:44:10.409414 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-13 00:44:10.409420 | orchestrator | Friday 13 March 2026 00:44:06 +0000 (0:00:00.227) 0:00:32.268 ********** 2026-03-13 00:44:10.409427 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-49707cb0-36ac-571b-bf56-7288c46886ca', 'data_vg': 'ceph-49707cb0-36ac-571b-bf56-7288c46886ca'}) 2026-03-13 00:44:10.409436 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-798cee0b-732e-51b2-a8a3-29d8c2932297', 'data_vg': 'ceph-798cee0b-732e-51b2-a8a3-29d8c2932297'}) 2026-03-13 00:44:10.409441 | orchestrator | 2026-03-13 00:44:10.409448 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-13 00:44:10.409454 | orchestrator | Friday 13 March 2026 00:44:08 +0000 (0:00:01.965) 0:00:34.234 ********** 2026-03-13 00:44:10.409460 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-49707cb0-36ac-571b-bf56-7288c46886ca', 'data_vg': 'ceph-49707cb0-36ac-571b-bf56-7288c46886ca'})  2026-03-13 00:44:10.409468 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-798cee0b-732e-51b2-a8a3-29d8c2932297', 'data_vg': 'ceph-798cee0b-732e-51b2-a8a3-29d8c2932297'})  2026-03-13 00:44:10.409481 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:10.409487 | orchestrator | 2026-03-13 00:44:10.409494 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-13 00:44:10.409500 | orchestrator | Friday 13 March 2026 00:44:08 +0000 (0:00:00.171) 0:00:34.405 ********** 2026-03-13 00:44:10.409506 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-49707cb0-36ac-571b-bf56-7288c46886ca', 'data_vg': 'ceph-49707cb0-36ac-571b-bf56-7288c46886ca'}) 2026-03-13 00:44:10.409521 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-798cee0b-732e-51b2-a8a3-29d8c2932297', 'data_vg': 'ceph-798cee0b-732e-51b2-a8a3-29d8c2932297'}) 2026-03-13 00:44:16.455614 | orchestrator | 2026-03-13 00:44:16.455678 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-13 00:44:16.455688 | orchestrator | Friday 13 March 2026 00:44:10 +0000 (0:00:01.499) 0:00:35.904 ********** 2026-03-13 00:44:16.455695 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-49707cb0-36ac-571b-bf56-7288c46886ca', 'data_vg': 
'ceph-49707cb0-36ac-571b-bf56-7288c46886ca'})  2026-03-13 00:44:16.455703 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-798cee0b-732e-51b2-a8a3-29d8c2932297', 'data_vg': 'ceph-798cee0b-732e-51b2-a8a3-29d8c2932297'})  2026-03-13 00:44:16.455710 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:16.455716 | orchestrator | 2026-03-13 00:44:16.455723 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-13 00:44:16.455730 | orchestrator | Friday 13 March 2026 00:44:10 +0000 (0:00:00.163) 0:00:36.068 ********** 2026-03-13 00:44:16.455737 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:16.455743 | orchestrator | 2026-03-13 00:44:16.455750 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-13 00:44:16.455757 | orchestrator | Friday 13 March 2026 00:44:10 +0000 (0:00:00.156) 0:00:36.225 ********** 2026-03-13 00:44:16.455764 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-49707cb0-36ac-571b-bf56-7288c46886ca', 'data_vg': 'ceph-49707cb0-36ac-571b-bf56-7288c46886ca'})  2026-03-13 00:44:16.455771 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-798cee0b-732e-51b2-a8a3-29d8c2932297', 'data_vg': 'ceph-798cee0b-732e-51b2-a8a3-29d8c2932297'})  2026-03-13 00:44:16.455779 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:16.455785 | orchestrator | 2026-03-13 00:44:16.455792 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-13 00:44:16.455798 | orchestrator | Friday 13 March 2026 00:44:10 +0000 (0:00:00.178) 0:00:36.403 ********** 2026-03-13 00:44:16.455805 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:16.455812 | orchestrator | 2026-03-13 00:44:16.455819 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-13 00:44:16.455835 | orchestrator | Friday 
13 March 2026 00:44:11 +0000 (0:00:00.144) 0:00:36.547 ********** 2026-03-13 00:44:16.455843 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-49707cb0-36ac-571b-bf56-7288c46886ca', 'data_vg': 'ceph-49707cb0-36ac-571b-bf56-7288c46886ca'})  2026-03-13 00:44:16.455850 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-798cee0b-732e-51b2-a8a3-29d8c2932297', 'data_vg': 'ceph-798cee0b-732e-51b2-a8a3-29d8c2932297'})  2026-03-13 00:44:16.455856 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:16.455862 | orchestrator | 2026-03-13 00:44:16.455869 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-13 00:44:16.455875 | orchestrator | Friday 13 March 2026 00:44:11 +0000 (0:00:00.401) 0:00:36.948 ********** 2026-03-13 00:44:16.455881 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:16.455898 | orchestrator | 2026-03-13 00:44:16.455904 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-13 00:44:16.455910 | orchestrator | Friday 13 March 2026 00:44:11 +0000 (0:00:00.176) 0:00:37.125 ********** 2026-03-13 00:44:16.455916 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-49707cb0-36ac-571b-bf56-7288c46886ca', 'data_vg': 'ceph-49707cb0-36ac-571b-bf56-7288c46886ca'})  2026-03-13 00:44:16.455944 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-798cee0b-732e-51b2-a8a3-29d8c2932297', 'data_vg': 'ceph-798cee0b-732e-51b2-a8a3-29d8c2932297'})  2026-03-13 00:44:16.455959 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:16.455966 | orchestrator | 2026-03-13 00:44:16.455972 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-13 00:44:16.455979 | orchestrator | Friday 13 March 2026 00:44:11 +0000 (0:00:00.181) 0:00:37.306 ********** 2026-03-13 00:44:16.455985 | orchestrator | ok: [testbed-node-4] 
2026-03-13 00:44:16.455993 | orchestrator | 2026-03-13 00:44:16.455999 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-13 00:44:16.456006 | orchestrator | Friday 13 March 2026 00:44:12 +0000 (0:00:00.169) 0:00:37.475 ********** 2026-03-13 00:44:16.456012 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-49707cb0-36ac-571b-bf56-7288c46886ca', 'data_vg': 'ceph-49707cb0-36ac-571b-bf56-7288c46886ca'})  2026-03-13 00:44:16.456019 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-798cee0b-732e-51b2-a8a3-29d8c2932297', 'data_vg': 'ceph-798cee0b-732e-51b2-a8a3-29d8c2932297'})  2026-03-13 00:44:16.456025 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:16.456032 | orchestrator | 2026-03-13 00:44:16.456038 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-13 00:44:16.456045 | orchestrator | Friday 13 March 2026 00:44:12 +0000 (0:00:00.190) 0:00:37.666 ********** 2026-03-13 00:44:16.456051 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-49707cb0-36ac-571b-bf56-7288c46886ca', 'data_vg': 'ceph-49707cb0-36ac-571b-bf56-7288c46886ca'})  2026-03-13 00:44:16.456057 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-798cee0b-732e-51b2-a8a3-29d8c2932297', 'data_vg': 'ceph-798cee0b-732e-51b2-a8a3-29d8c2932297'})  2026-03-13 00:44:16.456063 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:16.456070 | orchestrator | 2026-03-13 00:44:16.456076 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-13 00:44:16.456094 | orchestrator | Friday 13 March 2026 00:44:12 +0000 (0:00:00.217) 0:00:37.884 ********** 2026-03-13 00:44:16.456101 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-49707cb0-36ac-571b-bf56-7288c46886ca', 'data_vg': 'ceph-49707cb0-36ac-571b-bf56-7288c46886ca'})  2026-03-13 
00:44:16.456107 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-798cee0b-732e-51b2-a8a3-29d8c2932297', 'data_vg': 'ceph-798cee0b-732e-51b2-a8a3-29d8c2932297'})  2026-03-13 00:44:16.456114 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:16.456120 | orchestrator | 2026-03-13 00:44:16.456127 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-13 00:44:16.456181 | orchestrator | Friday 13 March 2026 00:44:12 +0000 (0:00:00.185) 0:00:38.069 ********** 2026-03-13 00:44:16.456196 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:16.456202 | orchestrator | 2026-03-13 00:44:16.456208 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-13 00:44:16.456215 | orchestrator | Friday 13 March 2026 00:44:12 +0000 (0:00:00.183) 0:00:38.253 ********** 2026-03-13 00:44:16.456221 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:16.456227 | orchestrator | 2026-03-13 00:44:16.456233 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-13 00:44:16.456240 | orchestrator | Friday 13 March 2026 00:44:13 +0000 (0:00:00.181) 0:00:38.435 ********** 2026-03-13 00:44:16.456246 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:16.456253 | orchestrator | 2026-03-13 00:44:16.456260 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-13 00:44:16.456267 | orchestrator | Friday 13 March 2026 00:44:13 +0000 (0:00:00.177) 0:00:38.612 ********** 2026-03-13 00:44:16.456274 | orchestrator | ok: [testbed-node-4] => { 2026-03-13 00:44:16.456280 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-13 00:44:16.456292 | orchestrator | } 2026-03-13 00:44:16.456300 | orchestrator | 2026-03-13 00:44:16.456307 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-13 
00:44:16.456313 | orchestrator | Friday 13 March 2026 00:44:13 +0000 (0:00:00.180) 0:00:38.793 ********** 2026-03-13 00:44:16.456320 | orchestrator | ok: [testbed-node-4] => { 2026-03-13 00:44:16.456326 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-13 00:44:16.456332 | orchestrator | } 2026-03-13 00:44:16.456338 | orchestrator | 2026-03-13 00:44:16.456349 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-13 00:44:16.456356 | orchestrator | Friday 13 March 2026 00:44:13 +0000 (0:00:00.155) 0:00:38.949 ********** 2026-03-13 00:44:16.456362 | orchestrator | ok: [testbed-node-4] => { 2026-03-13 00:44:16.456369 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-13 00:44:16.456375 | orchestrator | } 2026-03-13 00:44:16.456381 | orchestrator | 2026-03-13 00:44:16.456387 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-13 00:44:16.456394 | orchestrator | Friday 13 March 2026 00:44:13 +0000 (0:00:00.387) 0:00:39.337 ********** 2026-03-13 00:44:16.456400 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:44:16.456406 | orchestrator | 2026-03-13 00:44:16.456412 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-13 00:44:16.456418 | orchestrator | Friday 13 March 2026 00:44:14 +0000 (0:00:00.507) 0:00:39.844 ********** 2026-03-13 00:44:16.456425 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:44:16.456431 | orchestrator | 2026-03-13 00:44:16.456436 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-13 00:44:16.456440 | orchestrator | Friday 13 March 2026 00:44:14 +0000 (0:00:00.490) 0:00:40.335 ********** 2026-03-13 00:44:16.456444 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:44:16.456447 | orchestrator | 2026-03-13 00:44:16.456451 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-03-13 00:44:16.456455 | orchestrator | Friday 13 March 2026 00:44:15 +0000 (0:00:00.510) 0:00:40.845 ********** 2026-03-13 00:44:16.456458 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:44:16.456462 | orchestrator | 2026-03-13 00:44:16.456466 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-13 00:44:16.456469 | orchestrator | Friday 13 March 2026 00:44:15 +0000 (0:00:00.154) 0:00:41.000 ********** 2026-03-13 00:44:16.456473 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:16.456477 | orchestrator | 2026-03-13 00:44:16.456480 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-13 00:44:16.456484 | orchestrator | Friday 13 March 2026 00:44:15 +0000 (0:00:00.114) 0:00:41.114 ********** 2026-03-13 00:44:16.456488 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:16.456491 | orchestrator | 2026-03-13 00:44:16.456495 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-13 00:44:16.456499 | orchestrator | Friday 13 March 2026 00:44:15 +0000 (0:00:00.108) 0:00:41.222 ********** 2026-03-13 00:44:16.456502 | orchestrator | ok: [testbed-node-4] => { 2026-03-13 00:44:16.456506 | orchestrator |  "vgs_report": { 2026-03-13 00:44:16.456510 | orchestrator |  "vg": [] 2026-03-13 00:44:16.456513 | orchestrator |  } 2026-03-13 00:44:16.456517 | orchestrator | } 2026-03-13 00:44:16.456521 | orchestrator | 2026-03-13 00:44:16.456525 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-13 00:44:16.456529 | orchestrator | Friday 13 March 2026 00:44:15 +0000 (0:00:00.137) 0:00:41.360 ********** 2026-03-13 00:44:16.456532 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:16.456536 | orchestrator | 2026-03-13 00:44:16.456540 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-03-13 00:44:16.456550 | orchestrator | Friday 13 March 2026 00:44:16 +0000 (0:00:00.135) 0:00:41.495 ********** 2026-03-13 00:44:16.456554 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:16.456557 | orchestrator | 2026-03-13 00:44:16.456561 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-13 00:44:16.456569 | orchestrator | Friday 13 March 2026 00:44:16 +0000 (0:00:00.127) 0:00:41.623 ********** 2026-03-13 00:44:16.456573 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:16.456577 | orchestrator | 2026-03-13 00:44:16.456580 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-13 00:44:16.456584 | orchestrator | Friday 13 March 2026 00:44:16 +0000 (0:00:00.127) 0:00:41.750 ********** 2026-03-13 00:44:16.456588 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:16.456592 | orchestrator | 2026-03-13 00:44:16.456601 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-13 00:44:20.809407 | orchestrator | Friday 13 March 2026 00:44:16 +0000 (0:00:00.128) 0:00:41.878 ********** 2026-03-13 00:44:20.809487 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:20.809494 | orchestrator | 2026-03-13 00:44:20.809499 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-13 00:44:20.809504 | orchestrator | Friday 13 March 2026 00:44:16 +0000 (0:00:00.261) 0:00:42.140 ********** 2026-03-13 00:44:20.809507 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:20.809511 | orchestrator | 2026-03-13 00:44:20.809515 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-13 00:44:20.809520 | orchestrator | Friday 13 March 2026 00:44:16 +0000 (0:00:00.150) 0:00:42.290 ********** 2026-03-13 00:44:20.809524 | orchestrator | skipping: [testbed-node-4] 
2026-03-13 00:44:20.809527 | orchestrator | 2026-03-13 00:44:20.809531 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-13 00:44:20.809535 | orchestrator | Friday 13 March 2026 00:44:16 +0000 (0:00:00.129) 0:00:42.420 ********** 2026-03-13 00:44:20.809539 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:20.809542 | orchestrator | 2026-03-13 00:44:20.809546 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-13 00:44:20.809550 | orchestrator | Friday 13 March 2026 00:44:17 +0000 (0:00:00.114) 0:00:42.535 ********** 2026-03-13 00:44:20.809554 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:20.809558 | orchestrator | 2026-03-13 00:44:20.809561 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-13 00:44:20.809565 | orchestrator | Friday 13 March 2026 00:44:17 +0000 (0:00:00.125) 0:00:42.660 ********** 2026-03-13 00:44:20.809569 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:20.809572 | orchestrator | 2026-03-13 00:44:20.809576 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-13 00:44:20.809580 | orchestrator | Friday 13 March 2026 00:44:17 +0000 (0:00:00.125) 0:00:42.786 ********** 2026-03-13 00:44:20.809583 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:20.809587 | orchestrator | 2026-03-13 00:44:20.809591 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-13 00:44:20.809594 | orchestrator | Friday 13 March 2026 00:44:17 +0000 (0:00:00.122) 0:00:42.908 ********** 2026-03-13 00:44:20.809598 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:20.809602 | orchestrator | 2026-03-13 00:44:20.809606 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-13 00:44:20.809610 | orchestrator | 
Friday 13 March 2026 00:44:17 +0000 (0:00:00.124) 0:00:43.032 ********** 2026-03-13 00:44:20.809614 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:20.809618 | orchestrator | 2026-03-13 00:44:20.809622 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-13 00:44:20.809625 | orchestrator | Friday 13 March 2026 00:44:17 +0000 (0:00:00.142) 0:00:43.175 ********** 2026-03-13 00:44:20.809629 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:20.809633 | orchestrator | 2026-03-13 00:44:20.809636 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-13 00:44:20.809640 | orchestrator | Friday 13 March 2026 00:44:17 +0000 (0:00:00.130) 0:00:43.306 ********** 2026-03-13 00:44:20.809645 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-49707cb0-36ac-571b-bf56-7288c46886ca', 'data_vg': 'ceph-49707cb0-36ac-571b-bf56-7288c46886ca'})  2026-03-13 00:44:20.809683 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-798cee0b-732e-51b2-a8a3-29d8c2932297', 'data_vg': 'ceph-798cee0b-732e-51b2-a8a3-29d8c2932297'})  2026-03-13 00:44:20.809687 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:20.809691 | orchestrator | 2026-03-13 00:44:20.809695 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-13 00:44:20.809698 | orchestrator | Friday 13 March 2026 00:44:18 +0000 (0:00:00.144) 0:00:43.450 ********** 2026-03-13 00:44:20.809702 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-49707cb0-36ac-571b-bf56-7288c46886ca', 'data_vg': 'ceph-49707cb0-36ac-571b-bf56-7288c46886ca'})  2026-03-13 00:44:20.809706 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-798cee0b-732e-51b2-a8a3-29d8c2932297', 'data_vg': 'ceph-798cee0b-732e-51b2-a8a3-29d8c2932297'})  2026-03-13 00:44:20.809710 | orchestrator | skipping: 
[testbed-node-4] 2026-03-13 00:44:20.809714 | orchestrator | 2026-03-13 00:44:20.809717 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-13 00:44:20.809722 | orchestrator | Friday 13 March 2026 00:44:18 +0000 (0:00:00.134) 0:00:43.584 ********** 2026-03-13 00:44:20.809728 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-49707cb0-36ac-571b-bf56-7288c46886ca', 'data_vg': 'ceph-49707cb0-36ac-571b-bf56-7288c46886ca'})  2026-03-13 00:44:20.809734 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-798cee0b-732e-51b2-a8a3-29d8c2932297', 'data_vg': 'ceph-798cee0b-732e-51b2-a8a3-29d8c2932297'})  2026-03-13 00:44:20.809743 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:20.809750 | orchestrator | 2026-03-13 00:44:20.809756 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-13 00:44:20.809762 | orchestrator | Friday 13 March 2026 00:44:18 +0000 (0:00:00.291) 0:00:43.876 ********** 2026-03-13 00:44:20.809768 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-49707cb0-36ac-571b-bf56-7288c46886ca', 'data_vg': 'ceph-49707cb0-36ac-571b-bf56-7288c46886ca'})  2026-03-13 00:44:20.809774 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-798cee0b-732e-51b2-a8a3-29d8c2932297', 'data_vg': 'ceph-798cee0b-732e-51b2-a8a3-29d8c2932297'})  2026-03-13 00:44:20.809780 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:20.809786 | orchestrator | 2026-03-13 00:44:20.809806 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-13 00:44:20.809812 | orchestrator | Friday 13 March 2026 00:44:18 +0000 (0:00:00.148) 0:00:44.024 ********** 2026-03-13 00:44:20.809818 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-49707cb0-36ac-571b-bf56-7288c46886ca', 'data_vg': 
'ceph-49707cb0-36ac-571b-bf56-7288c46886ca'})  2026-03-13 00:44:20.809825 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-798cee0b-732e-51b2-a8a3-29d8c2932297', 'data_vg': 'ceph-798cee0b-732e-51b2-a8a3-29d8c2932297'})  2026-03-13 00:44:20.809831 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:20.809837 | orchestrator | 2026-03-13 00:44:20.809843 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-13 00:44:20.809849 | orchestrator | Friday 13 March 2026 00:44:18 +0000 (0:00:00.165) 0:00:44.190 ********** 2026-03-13 00:44:20.809855 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-49707cb0-36ac-571b-bf56-7288c46886ca', 'data_vg': 'ceph-49707cb0-36ac-571b-bf56-7288c46886ca'})  2026-03-13 00:44:20.809861 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-798cee0b-732e-51b2-a8a3-29d8c2932297', 'data_vg': 'ceph-798cee0b-732e-51b2-a8a3-29d8c2932297'})  2026-03-13 00:44:20.809867 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:20.809873 | orchestrator | 2026-03-13 00:44:20.809887 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-13 00:44:20.809893 | orchestrator | Friday 13 March 2026 00:44:18 +0000 (0:00:00.122) 0:00:44.312 ********** 2026-03-13 00:44:20.809899 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-49707cb0-36ac-571b-bf56-7288c46886ca', 'data_vg': 'ceph-49707cb0-36ac-571b-bf56-7288c46886ca'})  2026-03-13 00:44:20.809917 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-798cee0b-732e-51b2-a8a3-29d8c2932297', 'data_vg': 'ceph-798cee0b-732e-51b2-a8a3-29d8c2932297'})  2026-03-13 00:44:20.809924 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:20.809929 | orchestrator | 2026-03-13 00:44:20.809935 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-13 
00:44:20.809941 | orchestrator | Friday 13 March 2026 00:44:19 +0000 (0:00:00.140) 0:00:44.453 ********** 2026-03-13 00:44:20.809948 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-49707cb0-36ac-571b-bf56-7288c46886ca', 'data_vg': 'ceph-49707cb0-36ac-571b-bf56-7288c46886ca'})  2026-03-13 00:44:20.809954 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-798cee0b-732e-51b2-a8a3-29d8c2932297', 'data_vg': 'ceph-798cee0b-732e-51b2-a8a3-29d8c2932297'})  2026-03-13 00:44:20.809961 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:20.809967 | orchestrator | 2026-03-13 00:44:20.809974 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-13 00:44:20.809981 | orchestrator | Friday 13 March 2026 00:44:19 +0000 (0:00:00.138) 0:00:44.591 ********** 2026-03-13 00:44:20.809987 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:44:20.809994 | orchestrator | 2026-03-13 00:44:20.810001 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-13 00:44:20.810008 | orchestrator | Friday 13 March 2026 00:44:19 +0000 (0:00:00.626) 0:00:45.218 ********** 2026-03-13 00:44:20.810054 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:44:20.810061 | orchestrator | 2026-03-13 00:44:20.810067 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-13 00:44:20.810074 | orchestrator | Friday 13 March 2026 00:44:20 +0000 (0:00:00.505) 0:00:45.723 ********** 2026-03-13 00:44:20.810081 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:44:20.810087 | orchestrator | 2026-03-13 00:44:20.810094 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-13 00:44:20.810100 | orchestrator | Friday 13 March 2026 00:44:20 +0000 (0:00:00.144) 0:00:45.868 ********** 2026-03-13 00:44:20.810107 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-49707cb0-36ac-571b-bf56-7288c46886ca', 'vg_name': 'ceph-49707cb0-36ac-571b-bf56-7288c46886ca'}) 2026-03-13 00:44:20.810116 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-798cee0b-732e-51b2-a8a3-29d8c2932297', 'vg_name': 'ceph-798cee0b-732e-51b2-a8a3-29d8c2932297'}) 2026-03-13 00:44:20.810122 | orchestrator | 2026-03-13 00:44:20.810129 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-13 00:44:20.810189 | orchestrator | Friday 13 March 2026 00:44:20 +0000 (0:00:00.151) 0:00:46.020 ********** 2026-03-13 00:44:20.810197 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-49707cb0-36ac-571b-bf56-7288c46886ca', 'data_vg': 'ceph-49707cb0-36ac-571b-bf56-7288c46886ca'})  2026-03-13 00:44:20.810203 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-798cee0b-732e-51b2-a8a3-29d8c2932297', 'data_vg': 'ceph-798cee0b-732e-51b2-a8a3-29d8c2932297'})  2026-03-13 00:44:20.810210 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:20.810217 | orchestrator | 2026-03-13 00:44:20.810224 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-13 00:44:20.810230 | orchestrator | Friday 13 March 2026 00:44:20 +0000 (0:00:00.150) 0:00:46.170 ********** 2026-03-13 00:44:20.810237 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-49707cb0-36ac-571b-bf56-7288c46886ca', 'data_vg': 'ceph-49707cb0-36ac-571b-bf56-7288c46886ca'})  2026-03-13 00:44:20.810252 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-798cee0b-732e-51b2-a8a3-29d8c2932297', 'data_vg': 'ceph-798cee0b-732e-51b2-a8a3-29d8c2932297'})  2026-03-13 00:44:26.219638 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:26.219738 | orchestrator | 2026-03-13 00:44:26.219765 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-13 00:44:26.219774 | 
orchestrator | Friday 13 March 2026 00:44:20 +0000 (0:00:00.137) 0:00:46.307 ********** 2026-03-13 00:44:26.219782 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-49707cb0-36ac-571b-bf56-7288c46886ca', 'data_vg': 'ceph-49707cb0-36ac-571b-bf56-7288c46886ca'})  2026-03-13 00:44:26.219790 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-798cee0b-732e-51b2-a8a3-29d8c2932297', 'data_vg': 'ceph-798cee0b-732e-51b2-a8a3-29d8c2932297'})  2026-03-13 00:44:26.219797 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:44:26.219803 | orchestrator | 2026-03-13 00:44:26.219810 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-13 00:44:26.219816 | orchestrator | Friday 13 March 2026 00:44:21 +0000 (0:00:00.140) 0:00:46.448 ********** 2026-03-13 00:44:26.219822 | orchestrator | ok: [testbed-node-4] => { 2026-03-13 00:44:26.219828 | orchestrator |  "lvm_report": { 2026-03-13 00:44:26.219836 | orchestrator |  "lv": [ 2026-03-13 00:44:26.219843 | orchestrator |  { 2026-03-13 00:44:26.219849 | orchestrator |  "lv_name": "osd-block-49707cb0-36ac-571b-bf56-7288c46886ca", 2026-03-13 00:44:26.219856 | orchestrator |  "vg_name": "ceph-49707cb0-36ac-571b-bf56-7288c46886ca" 2026-03-13 00:44:26.219862 | orchestrator |  }, 2026-03-13 00:44:26.219868 | orchestrator |  { 2026-03-13 00:44:26.219875 | orchestrator |  "lv_name": "osd-block-798cee0b-732e-51b2-a8a3-29d8c2932297", 2026-03-13 00:44:26.219881 | orchestrator |  "vg_name": "ceph-798cee0b-732e-51b2-a8a3-29d8c2932297" 2026-03-13 00:44:26.219887 | orchestrator |  } 2026-03-13 00:44:26.219893 | orchestrator |  ], 2026-03-13 00:44:26.219899 | orchestrator |  "pv": [ 2026-03-13 00:44:26.219905 | orchestrator |  { 2026-03-13 00:44:26.219911 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-13 00:44:26.219931 | orchestrator |  "vg_name": "ceph-49707cb0-36ac-571b-bf56-7288c46886ca" 2026-03-13 00:44:26.219937 | orchestrator |  }, 2026-03-13 
00:44:26.219943 | orchestrator |  { 2026-03-13 00:44:26.219950 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-13 00:44:26.219956 | orchestrator |  "vg_name": "ceph-798cee0b-732e-51b2-a8a3-29d8c2932297" 2026-03-13 00:44:26.219962 | orchestrator |  } 2026-03-13 00:44:26.219967 | orchestrator |  ] 2026-03-13 00:44:26.219974 | orchestrator |  } 2026-03-13 00:44:26.219980 | orchestrator | } 2026-03-13 00:44:26.219988 | orchestrator | 2026-03-13 00:44:26.219994 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-13 00:44:26.220001 | orchestrator | 2026-03-13 00:44:26.220009 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-13 00:44:26.220015 | orchestrator | Friday 13 March 2026 00:44:21 +0000 (0:00:00.416) 0:00:46.865 ********** 2026-03-13 00:44:26.220022 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-13 00:44:26.220029 | orchestrator | 2026-03-13 00:44:26.220035 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-13 00:44:26.220041 | orchestrator | Friday 13 March 2026 00:44:21 +0000 (0:00:00.269) 0:00:47.134 ********** 2026-03-13 00:44:26.220048 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:44:26.220054 | orchestrator | 2026-03-13 00:44:26.220060 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-13 00:44:26.220066 | orchestrator | Friday 13 March 2026 00:44:21 +0000 (0:00:00.200) 0:00:47.335 ********** 2026-03-13 00:44:26.220072 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-13 00:44:26.220078 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-13 00:44:26.220084 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-13 00:44:26.220091 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-13 00:44:26.220103 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-13 00:44:26.220109 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-13 00:44:26.220115 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-13 00:44:26.220121 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-13 00:44:26.220127 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-13 00:44:26.220160 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-13 00:44:26.220167 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-13 00:44:26.220173 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-13 00:44:26.220179 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-13 00:44:26.220185 | orchestrator |
2026-03-13 00:44:26.220191 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:44:26.220197 | orchestrator | Friday 13 March 2026 00:44:22 +0000 (0:00:00.355) 0:00:47.690 **********
2026-03-13 00:44:26.220203 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:26.220209 | orchestrator |
2026-03-13 00:44:26.220215 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:44:26.220221 | orchestrator | Friday 13 March 2026 00:44:22 +0000 (0:00:00.185) 0:00:47.875 **********
2026-03-13 00:44:26.220228 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:26.220234 | orchestrator |
2026-03-13 00:44:26.220241 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:44:26.220264 | orchestrator | Friday 13 March 2026 00:44:22 +0000 (0:00:00.183) 0:00:48.059 **********
2026-03-13 00:44:26.220271 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:26.220277 | orchestrator |
2026-03-13 00:44:26.220284 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:44:26.220290 | orchestrator | Friday 13 March 2026 00:44:22 +0000 (0:00:00.173) 0:00:48.233 **********
2026-03-13 00:44:26.220296 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:26.220302 | orchestrator |
2026-03-13 00:44:26.220309 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:44:26.220316 | orchestrator | Friday 13 March 2026 00:44:22 +0000 (0:00:00.182) 0:00:48.415 **********
2026-03-13 00:44:26.220323 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:26.220330 | orchestrator |
2026-03-13 00:44:26.220336 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:44:26.220342 | orchestrator | Friday 13 March 2026 00:44:23 +0000 (0:00:00.483) 0:00:48.899 **********
2026-03-13 00:44:26.220349 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:26.220355 | orchestrator |
2026-03-13 00:44:26.220361 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:44:26.220368 | orchestrator | Friday 13 March 2026 00:44:23 +0000 (0:00:00.180) 0:00:49.079 **********
2026-03-13 00:44:26.220375 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:26.220381 | orchestrator |
2026-03-13 00:44:26.220388 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:44:26.220395 | orchestrator | Friday 13 March 2026 00:44:23 +0000 (0:00:00.183) 0:00:49.262 **********
2026-03-13 00:44:26.220401 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:26.220408 | orchestrator |
2026-03-13 00:44:26.220414 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:44:26.220421 | orchestrator | Friday 13 March 2026 00:44:24 +0000 (0:00:00.173) 0:00:49.436 **********
2026-03-13 00:44:26.220428 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf)
2026-03-13 00:44:26.220435 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf)
2026-03-13 00:44:26.220448 | orchestrator |
2026-03-13 00:44:26.220454 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:44:26.220461 | orchestrator | Friday 13 March 2026 00:44:24 +0000 (0:00:00.394) 0:00:49.830 **********
2026-03-13 00:44:26.220467 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_173613da-cd5d-4175-9e2e-faf4092bf0a3)
2026-03-13 00:44:26.220474 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_173613da-cd5d-4175-9e2e-faf4092bf0a3)
2026-03-13 00:44:26.220481 | orchestrator |
2026-03-13 00:44:26.220487 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:44:26.220494 | orchestrator | Friday 13 March 2026 00:44:24 +0000 (0:00:00.416) 0:00:50.246 **********
2026-03-13 00:44:26.220500 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_dc527160-e7af-4d74-be06-07ea7bd10a9b)
2026-03-13 00:44:26.220507 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_dc527160-e7af-4d74-be06-07ea7bd10a9b)
2026-03-13 00:44:26.220513 | orchestrator |
2026-03-13 00:44:26.220520 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:44:26.220527 | orchestrator | Friday 13 March 2026 00:44:25 +0000 (0:00:00.403) 0:00:50.650 **********
2026-03-13 00:44:26.220533 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2fefda09-8576-4844-bc1b-e9a7eb3ad8aa)
2026-03-13 00:44:26.220540 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2fefda09-8576-4844-bc1b-e9a7eb3ad8aa)
2026-03-13 00:44:26.220546 | orchestrator |
2026-03-13 00:44:26.220552 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-13 00:44:26.220559 | orchestrator | Friday 13 March 2026 00:44:25 +0000 (0:00:00.400) 0:00:51.050 **********
2026-03-13 00:44:26.220565 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-13 00:44:26.220571 | orchestrator |
2026-03-13 00:44:26.220578 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:44:26.220585 | orchestrator | Friday 13 March 2026 00:44:25 +0000 (0:00:00.291) 0:00:51.342 **********
2026-03-13 00:44:26.220592 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-13 00:44:26.220598 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-13 00:44:26.220605 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-13 00:44:26.220612 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-13 00:44:26.220619 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-13 00:44:26.220625 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-13 00:44:26.220632 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-13 00:44:26.220638 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-13 00:44:26.220644 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-13 00:44:26.220650 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-13 00:44:26.220656 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-13 00:44:26.220667 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-13 00:44:34.521627 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-13 00:44:34.521686 | orchestrator |
2026-03-13 00:44:34.521692 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:44:34.521697 | orchestrator | Friday 13 March 2026 00:44:26 +0000 (0:00:00.377) 0:00:51.720 **********
2026-03-13 00:44:34.521714 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:34.521718 | orchestrator |
2026-03-13 00:44:34.521722 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:44:34.521726 | orchestrator | Friday 13 March 2026 00:44:26 +0000 (0:00:00.181) 0:00:51.901 **********
2026-03-13 00:44:34.521730 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:34.521734 | orchestrator |
2026-03-13 00:44:34.521763 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:44:34.521767 | orchestrator | Friday 13 March 2026 00:44:26 +0000 (0:00:00.473) 0:00:52.375 **********
2026-03-13 00:44:34.521771 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:34.521775 | orchestrator |
2026-03-13 00:44:34.521779 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:44:34.521783 | orchestrator | Friday 13 March 2026 00:44:27 +0000 (0:00:00.184) 0:00:52.560 **********
2026-03-13 00:44:34.521786 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:34.521790 | orchestrator |
2026-03-13 00:44:34.521794 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:44:34.521798 | orchestrator | Friday 13 March 2026 00:44:27 +0000 (0:00:00.191) 0:00:52.751 **********
2026-03-13 00:44:34.521801 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:34.521805 | orchestrator |
2026-03-13 00:44:34.521809 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:44:34.521813 | orchestrator | Friday 13 March 2026 00:44:27 +0000 (0:00:00.192) 0:00:52.944 **********
2026-03-13 00:44:34.521816 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:34.521820 | orchestrator |
2026-03-13 00:44:34.521827 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:44:34.521830 | orchestrator | Friday 13 March 2026 00:44:27 +0000 (0:00:00.176) 0:00:53.121 **********
2026-03-13 00:44:34.521834 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:34.521838 | orchestrator |
2026-03-13 00:44:34.521842 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:44:34.521845 | orchestrator | Friday 13 March 2026 00:44:27 +0000 (0:00:00.170) 0:00:53.291 **********
2026-03-13 00:44:34.521849 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:34.521853 | orchestrator |
2026-03-13 00:44:34.521856 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:44:34.521860 | orchestrator | Friday 13 March 2026 00:44:28 +0000 (0:00:00.194) 0:00:53.485 **********
2026-03-13 00:44:34.521864 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-03-13 00:44:34.521868 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-03-13 00:44:34.521872 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-03-13 00:44:34.521876 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-03-13 00:44:34.521880 | orchestrator |
2026-03-13 00:44:34.521884 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:44:34.521888 | orchestrator | Friday 13 March 2026 00:44:28 +0000 (0:00:00.608) 0:00:54.094 **********
2026-03-13 00:44:34.521892 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:34.521904 | orchestrator |
2026-03-13 00:44:34.521909 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:44:34.521912 | orchestrator | Friday 13 March 2026 00:44:28 +0000 (0:00:00.179) 0:00:54.273 **********
2026-03-13 00:44:34.521916 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:34.521920 | orchestrator |
2026-03-13 00:44:34.521923 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:44:34.521927 | orchestrator | Friday 13 March 2026 00:44:29 +0000 (0:00:00.198) 0:00:54.471 **********
2026-03-13 00:44:34.521931 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:34.521935 | orchestrator |
2026-03-13 00:44:34.521938 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-13 00:44:34.521942 | orchestrator | Friday 13 March 2026 00:44:29 +0000 (0:00:00.168) 0:00:54.640 **********
2026-03-13 00:44:34.521949 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:34.521953 | orchestrator |
2026-03-13 00:44:34.521957 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-13 00:44:34.521960 | orchestrator | Friday 13 March 2026 00:44:29 +0000 (0:00:00.186) 0:00:54.827 **********
2026-03-13 00:44:34.521964 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:34.521968 | orchestrator |
2026-03-13 00:44:34.521971 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-13 00:44:34.521975 | orchestrator | Friday 13 March 2026 00:44:29 +0000 (0:00:00.243) 0:00:55.070 **********
2026-03-13 00:44:34.521979 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '119e494c-61db-56d2-84c4-ae65d8356f6a'}})
2026-03-13 00:44:34.521983 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5854fe4a-6d96-56a2-8017-73d7ac8736b8'}})
2026-03-13 00:44:34.521987 | orchestrator |
2026-03-13 00:44:34.521995 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-13 00:44:34.521999 | orchestrator | Friday 13 March 2026 00:44:29 +0000 (0:00:00.177) 0:00:55.248 **********
2026-03-13 00:44:34.522003 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-119e494c-61db-56d2-84c4-ae65d8356f6a', 'data_vg': 'ceph-119e494c-61db-56d2-84c4-ae65d8356f6a'})
2026-03-13 00:44:34.522008 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5854fe4a-6d96-56a2-8017-73d7ac8736b8', 'data_vg': 'ceph-5854fe4a-6d96-56a2-8017-73d7ac8736b8'})
2026-03-13 00:44:34.522042 | orchestrator |
2026-03-13 00:44:34.522047 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-13 00:44:34.522059 | orchestrator | Friday 13 March 2026 00:44:31 +0000 (0:00:01.834) 0:00:57.082 **********
2026-03-13 00:44:34.522064 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-119e494c-61db-56d2-84c4-ae65d8356f6a', 'data_vg': 'ceph-119e494c-61db-56d2-84c4-ae65d8356f6a'})
2026-03-13 00:44:34.522069 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5854fe4a-6d96-56a2-8017-73d7ac8736b8', 'data_vg': 'ceph-5854fe4a-6d96-56a2-8017-73d7ac8736b8'})
2026-03-13 00:44:34.522072 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:34.522076 | orchestrator |
2026-03-13 00:44:34.522080 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-13 00:44:34.522084 | orchestrator | Friday 13 March 2026 00:44:31 +0000 (0:00:00.151) 0:00:57.234 **********
2026-03-13 00:44:34.522087 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-119e494c-61db-56d2-84c4-ae65d8356f6a', 'data_vg': 'ceph-119e494c-61db-56d2-84c4-ae65d8356f6a'})
2026-03-13 00:44:34.522091 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5854fe4a-6d96-56a2-8017-73d7ac8736b8', 'data_vg': 'ceph-5854fe4a-6d96-56a2-8017-73d7ac8736b8'})
2026-03-13 00:44:34.522095 | orchestrator |
2026-03-13 00:44:34.522098 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-13 00:44:34.522102 | orchestrator | Friday 13 March 2026 00:44:33 +0000 (0:00:01.287) 0:00:58.521 **********
2026-03-13 00:44:34.522106 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-119e494c-61db-56d2-84c4-ae65d8356f6a', 'data_vg': 'ceph-119e494c-61db-56d2-84c4-ae65d8356f6a'})
2026-03-13 00:44:34.522110 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5854fe4a-6d96-56a2-8017-73d7ac8736b8', 'data_vg': 'ceph-5854fe4a-6d96-56a2-8017-73d7ac8736b8'})
2026-03-13 00:44:34.522116 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:34.522120 | orchestrator |
2026-03-13 00:44:34.522123 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-13 00:44:34.522127 | orchestrator | Friday 13 March 2026 00:44:33 +0000 (0:00:00.156) 0:00:58.659 **********
2026-03-13 00:44:34.522131 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:34.522134 | orchestrator |
2026-03-13 00:44:34.522138 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-13 00:44:34.522158 | orchestrator | Friday 13 March 2026 00:44:33 +0000 (0:00:00.156) 0:00:58.816 **********
2026-03-13 00:44:34.522170 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-119e494c-61db-56d2-84c4-ae65d8356f6a', 'data_vg': 'ceph-119e494c-61db-56d2-84c4-ae65d8356f6a'})
2026-03-13 00:44:34.522177 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5854fe4a-6d96-56a2-8017-73d7ac8736b8', 'data_vg': 'ceph-5854fe4a-6d96-56a2-8017-73d7ac8736b8'})
2026-03-13 00:44:34.522182 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:34.522188 | orchestrator |
2026-03-13 00:44:34.522193 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-13 00:44:34.522199 | orchestrator | Friday 13 March 2026 00:44:33 +0000 (0:00:00.164) 0:00:58.980 **********
2026-03-13 00:44:34.522205 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:34.522212 | orchestrator |
2026-03-13 00:44:34.522219 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-13 00:44:34.522226 | orchestrator | Friday 13 March 2026 00:44:33 +0000 (0:00:00.128) 0:00:59.108 **********
2026-03-13 00:44:34.522233 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-119e494c-61db-56d2-84c4-ae65d8356f6a', 'data_vg': 'ceph-119e494c-61db-56d2-84c4-ae65d8356f6a'})
2026-03-13 00:44:34.522240 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5854fe4a-6d96-56a2-8017-73d7ac8736b8', 'data_vg': 'ceph-5854fe4a-6d96-56a2-8017-73d7ac8736b8'})
2026-03-13 00:44:34.522245 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:34.522249 | orchestrator |
2026-03-13 00:44:34.522253 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-13 00:44:34.522258 | orchestrator | Friday 13 March 2026 00:44:33 +0000 (0:00:00.132) 0:00:59.241 **********
2026-03-13 00:44:34.522262 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:34.522266 | orchestrator |
2026-03-13 00:44:34.522270 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-13 00:44:34.522275 | orchestrator | Friday 13 March 2026 00:44:33 +0000 (0:00:00.155) 0:00:59.397 **********
2026-03-13 00:44:34.522279 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-119e494c-61db-56d2-84c4-ae65d8356f6a', 'data_vg': 'ceph-119e494c-61db-56d2-84c4-ae65d8356f6a'})
2026-03-13 00:44:34.522283 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5854fe4a-6d96-56a2-8017-73d7ac8736b8', 'data_vg': 'ceph-5854fe4a-6d96-56a2-8017-73d7ac8736b8'})
2026-03-13 00:44:34.522288 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:34.522292 | orchestrator |
2026-03-13 00:44:34.522296 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-13 00:44:34.522300 | orchestrator | Friday 13 March 2026 00:44:34 +0000 (0:00:00.158) 0:00:59.555 **********
2026-03-13 00:44:34.522304 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:44:34.522309 | orchestrator |
2026-03-13 00:44:34.522313 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-13 00:44:34.522317 | orchestrator | Friday 13 March 2026 00:44:34 +0000 (0:00:00.307) 0:00:59.863 **********
2026-03-13 00:44:34.522326 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-119e494c-61db-56d2-84c4-ae65d8356f6a', 'data_vg': 'ceph-119e494c-61db-56d2-84c4-ae65d8356f6a'})
2026-03-13 00:44:40.267696 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5854fe4a-6d96-56a2-8017-73d7ac8736b8', 'data_vg': 'ceph-5854fe4a-6d96-56a2-8017-73d7ac8736b8'})
2026-03-13 00:44:40.267787 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:40.267797 | orchestrator |
2026-03-13 00:44:40.267804 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-13 00:44:40.267813 | orchestrator | Friday 13 March 2026 00:44:34 +0000 (0:00:00.196) 0:01:00.059 **********
2026-03-13 00:44:40.267820 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-119e494c-61db-56d2-84c4-ae65d8356f6a', 'data_vg': 'ceph-119e494c-61db-56d2-84c4-ae65d8356f6a'})
2026-03-13 00:44:40.267827 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5854fe4a-6d96-56a2-8017-73d7ac8736b8', 'data_vg': 'ceph-5854fe4a-6d96-56a2-8017-73d7ac8736b8'})
2026-03-13 00:44:40.267853 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:40.267859 | orchestrator |
2026-03-13 00:44:40.267866 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-13 00:44:40.267873 | orchestrator | Friday 13 March 2026 00:44:34 +0000 (0:00:00.154) 0:01:00.214 **********
2026-03-13 00:44:40.267880 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-119e494c-61db-56d2-84c4-ae65d8356f6a', 'data_vg': 'ceph-119e494c-61db-56d2-84c4-ae65d8356f6a'})
2026-03-13 00:44:40.267885 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5854fe4a-6d96-56a2-8017-73d7ac8736b8', 'data_vg': 'ceph-5854fe4a-6d96-56a2-8017-73d7ac8736b8'})
2026-03-13 00:44:40.267892 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:40.267898 | orchestrator |
2026-03-13 00:44:40.267904 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-13 00:44:40.267924 | orchestrator | Friday 13 March 2026 00:44:34 +0000 (0:00:00.146) 0:01:00.360 **********
2026-03-13 00:44:40.267930 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:40.267936 | orchestrator |
2026-03-13 00:44:40.267942 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-13 00:44:40.267948 | orchestrator | Friday 13 March 2026 00:44:35 +0000 (0:00:00.147) 0:01:00.507 **********
2026-03-13 00:44:40.267954 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:40.267959 | orchestrator |
2026-03-13 00:44:40.267965 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-13 00:44:40.267970 | orchestrator | Friday 13 March 2026 00:44:35 +0000 (0:00:00.109) 0:01:00.617 **********
2026-03-13 00:44:40.267976 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:40.267981 | orchestrator |
2026-03-13 00:44:40.267987 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-13 00:44:40.267993 | orchestrator | Friday 13 March 2026 00:44:35 +0000 (0:00:00.124) 0:01:00.741 **********
2026-03-13 00:44:40.267999 | orchestrator | ok: [testbed-node-5] => {
2026-03-13 00:44:40.268006 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-13 00:44:40.268012 | orchestrator | }
2026-03-13 00:44:40.268019 | orchestrator |
2026-03-13 00:44:40.268025 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-13 00:44:40.268031 | orchestrator | Friday 13 March 2026 00:44:35 +0000 (0:00:00.124) 0:01:00.865 **********
2026-03-13 00:44:40.268037 | orchestrator | ok: [testbed-node-5] => {
2026-03-13 00:44:40.268043 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-13 00:44:40.268049 | orchestrator | }
2026-03-13 00:44:40.268056 | orchestrator |
2026-03-13 00:44:40.268062 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-13 00:44:40.268068 | orchestrator | Friday 13 March 2026 00:44:35 +0000 (0:00:00.122) 0:01:00.988 **********
2026-03-13 00:44:40.268074 | orchestrator | ok: [testbed-node-5] => {
2026-03-13 00:44:40.268081 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-13 00:44:40.268087 | orchestrator | }
2026-03-13 00:44:40.268093 | orchestrator |
2026-03-13 00:44:40.268099 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-13 00:44:40.268106 | orchestrator | Friday 13 March 2026 00:44:35 +0000 (0:00:00.135) 0:01:01.123 **********
2026-03-13 00:44:40.268112 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:44:40.268119 | orchestrator |
2026-03-13 00:44:40.268125 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-13 00:44:40.268132 | orchestrator | Friday 13 March 2026 00:44:36 +0000 (0:00:00.492) 0:01:01.615 **********
2026-03-13 00:44:40.268138 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:44:40.268144 | orchestrator |
2026-03-13 00:44:40.268222 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-13 00:44:40.268229 | orchestrator | Friday 13 March 2026 00:44:36 +0000 (0:00:00.532) 0:01:02.148 **********
2026-03-13 00:44:40.268236 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:44:40.268249 | orchestrator |
2026-03-13 00:44:40.268255 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-13 00:44:40.268262 | orchestrator | Friday 13 March 2026 00:44:37 +0000 (0:00:00.685) 0:01:02.833 **********
2026-03-13 00:44:40.268269 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:44:40.268275 | orchestrator |
2026-03-13 00:44:40.268282 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-13 00:44:40.268289 | orchestrator | Friday 13 March 2026 00:44:37 +0000 (0:00:00.142) 0:01:02.976 **********
2026-03-13 00:44:40.268295 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:40.268302 | orchestrator |
2026-03-13 00:44:40.268308 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-13 00:44:40.268315 | orchestrator | Friday 13 March 2026 00:44:37 +0000 (0:00:00.100) 0:01:03.076 **********
2026-03-13 00:44:40.268321 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:40.268327 | orchestrator |
2026-03-13 00:44:40.268334 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-13 00:44:40.268340 | orchestrator | Friday 13 March 2026 00:44:37 +0000 (0:00:00.104) 0:01:03.181 **********
2026-03-13 00:44:40.268351 | orchestrator | ok: [testbed-node-5] => {
2026-03-13 00:44:40.268358 | orchestrator |     "vgs_report": {
2026-03-13 00:44:40.268364 | orchestrator |         "vg": []
2026-03-13 00:44:40.268386 | orchestrator |     }
2026-03-13 00:44:40.268392 | orchestrator | }
2026-03-13 00:44:40.268397 | orchestrator |
2026-03-13 00:44:40.268402 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-13 00:44:40.268407 | orchestrator | Friday 13 March 2026 00:44:37 +0000 (0:00:00.136) 0:01:03.317 **********
2026-03-13 00:44:40.268413 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:40.268419 | orchestrator |
2026-03-13 00:44:40.268426 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-13 00:44:40.268432 | orchestrator | Friday 13 March 2026 00:44:38 +0000 (0:00:00.135) 0:01:03.453 **********
2026-03-13 00:44:40.268438 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:40.268444 | orchestrator |
2026-03-13 00:44:40.268450 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-13 00:44:40.268457 | orchestrator | Friday 13 March 2026 00:44:38 +0000 (0:00:00.133) 0:01:03.587 **********
2026-03-13 00:44:40.268463 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:40.268470 | orchestrator |
2026-03-13 00:44:40.268477 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-13 00:44:40.268483 | orchestrator | Friday 13 March 2026 00:44:38 +0000 (0:00:00.127) 0:01:03.715 **********
2026-03-13 00:44:40.268489 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:40.268493 | orchestrator |
2026-03-13 00:44:40.268497 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-13 00:44:40.268502 | orchestrator | Friday 13 March 2026 00:44:38 +0000 (0:00:00.115) 0:01:03.830 **********
2026-03-13 00:44:40.268506 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:40.268510 | orchestrator |
2026-03-13 00:44:40.268515 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-13 00:44:40.268519 | orchestrator | Friday 13 March 2026 00:44:38 +0000 (0:00:00.132) 0:01:03.963 **********
2026-03-13 00:44:40.268523 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:40.268527 | orchestrator |
2026-03-13 00:44:40.268532 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-13 00:44:40.268536 | orchestrator | Friday 13 March 2026 00:44:38 +0000 (0:00:00.136) 0:01:04.099 **********
2026-03-13 00:44:40.268541 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:40.268545 | orchestrator |
2026-03-13 00:44:40.268550 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-13 00:44:40.268554 | orchestrator | Friday 13 March 2026 00:44:38 +0000 (0:00:00.138) 0:01:04.237 **********
2026-03-13 00:44:40.268558 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:40.268563 | orchestrator |
2026-03-13 00:44:40.268567 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-13 00:44:40.268576 | orchestrator | Friday 13 March 2026 00:44:39 +0000 (0:00:00.274) 0:01:04.511 **********
2026-03-13 00:44:40.268581 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:40.268585 | orchestrator |
2026-03-13 00:44:40.268590 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-13 00:44:40.268594 | orchestrator | Friday 13 March 2026 00:44:39 +0000 (0:00:00.129) 0:01:04.641 **********
2026-03-13 00:44:40.268599 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:40.268603 | orchestrator |
2026-03-13 00:44:40.268607 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-13 00:44:40.268612 | orchestrator | Friday 13 March 2026 00:44:39 +0000 (0:00:00.122) 0:01:04.763 **********
2026-03-13 00:44:40.268616 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:40.268621 | orchestrator |
2026-03-13 00:44:40.268625 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-13 00:44:40.268629 | orchestrator | Friday 13 March 2026 00:44:39 +0000 (0:00:00.159) 0:01:04.922 **********
2026-03-13 00:44:40.268634 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:40.268638 | orchestrator |
2026-03-13 00:44:40.268642 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-13 00:44:40.268646 | orchestrator | Friday 13 March 2026 00:44:39 +0000 (0:00:00.128) 0:01:05.051 **********
2026-03-13 00:44:40.268650 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:40.268654 | orchestrator |
2026-03-13 00:44:40.268658 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-13 00:44:40.268661 | orchestrator | Friday 13 March 2026 00:44:39 +0000 (0:00:00.123) 0:01:05.174 **********
2026-03-13 00:44:40.268665 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:40.268669 | orchestrator |
2026-03-13 00:44:40.268672 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-13 00:44:40.268676 | orchestrator | Friday 13 March 2026 00:44:39 +0000 (0:00:00.130) 0:01:05.305 **********
2026-03-13 00:44:40.268680 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-119e494c-61db-56d2-84c4-ae65d8356f6a', 'data_vg': 'ceph-119e494c-61db-56d2-84c4-ae65d8356f6a'})
2026-03-13 00:44:40.268684 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5854fe4a-6d96-56a2-8017-73d7ac8736b8', 'data_vg': 'ceph-5854fe4a-6d96-56a2-8017-73d7ac8736b8'})
2026-03-13 00:44:40.268688 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:40.268692 | orchestrator |
2026-03-13 00:44:40.268695 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-13 00:44:40.268699 | orchestrator | Friday 13 March 2026 00:44:40 +0000 (0:00:00.148) 0:01:05.454 **********
2026-03-13 00:44:40.268703 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-119e494c-61db-56d2-84c4-ae65d8356f6a', 'data_vg': 'ceph-119e494c-61db-56d2-84c4-ae65d8356f6a'})
2026-03-13 00:44:40.268707 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5854fe4a-6d96-56a2-8017-73d7ac8736b8', 'data_vg': 'ceph-5854fe4a-6d96-56a2-8017-73d7ac8736b8'})
2026-03-13 00:44:40.268711 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:40.268715 | orchestrator |
2026-03-13 00:44:40.268718 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-13 00:44:40.268722 | orchestrator | Friday 13 March 2026 00:44:40 +0000 (0:00:00.175) 0:01:05.629 **********
2026-03-13 00:44:40.268731 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-119e494c-61db-56d2-84c4-ae65d8356f6a', 'data_vg': 'ceph-119e494c-61db-56d2-84c4-ae65d8356f6a'})
2026-03-13 00:44:43.384695 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5854fe4a-6d96-56a2-8017-73d7ac8736b8', 'data_vg': 'ceph-5854fe4a-6d96-56a2-8017-73d7ac8736b8'})
2026-03-13 00:44:43.384759 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:43.384765 | orchestrator |
2026-03-13 00:44:43.384770 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-13 00:44:43.384775 | orchestrator | Friday 13 March 2026 00:44:40 +0000 (0:00:00.144) 0:01:05.773 **********
2026-03-13 00:44:43.384798 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-119e494c-61db-56d2-84c4-ae65d8356f6a', 'data_vg': 'ceph-119e494c-61db-56d2-84c4-ae65d8356f6a'})
2026-03-13 00:44:43.385314 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5854fe4a-6d96-56a2-8017-73d7ac8736b8', 'data_vg': 'ceph-5854fe4a-6d96-56a2-8017-73d7ac8736b8'})
2026-03-13 00:44:43.385334 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:43.385344 | orchestrator |
2026-03-13 00:44:43.385352 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-13 00:44:43.385362 | orchestrator | Friday 13 March 2026 00:44:40 +0000 (0:00:00.146) 0:01:05.918 **********
2026-03-13 00:44:43.385375 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-119e494c-61db-56d2-84c4-ae65d8356f6a', 'data_vg': 'ceph-119e494c-61db-56d2-84c4-ae65d8356f6a'})
2026-03-13 00:44:43.385392 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5854fe4a-6d96-56a2-8017-73d7ac8736b8', 'data_vg': 'ceph-5854fe4a-6d96-56a2-8017-73d7ac8736b8'})
2026-03-13 00:44:43.385404 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:43.385415 | orchestrator |
2026-03-13 00:44:43.385422 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-13 00:44:43.385430 | orchestrator | Friday 13 March 2026 00:44:40 +0000 (0:00:00.146) 0:01:06.065 **********
2026-03-13 00:44:43.385437 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-119e494c-61db-56d2-84c4-ae65d8356f6a', 'data_vg': 'ceph-119e494c-61db-56d2-84c4-ae65d8356f6a'})
2026-03-13 00:44:43.385444 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5854fe4a-6d96-56a2-8017-73d7ac8736b8', 'data_vg': 'ceph-5854fe4a-6d96-56a2-8017-73d7ac8736b8'})
2026-03-13 00:44:43.385451 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:43.385457 | orchestrator |
2026-03-13 00:44:43.385464 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-13 00:44:43.385470 | orchestrator | Friday 13 March 2026 00:44:40 +0000 (0:00:00.279) 0:01:06.344 **********
2026-03-13 00:44:43.385477 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-119e494c-61db-56d2-84c4-ae65d8356f6a', 'data_vg': 'ceph-119e494c-61db-56d2-84c4-ae65d8356f6a'})
2026-03-13 00:44:43.385485 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5854fe4a-6d96-56a2-8017-73d7ac8736b8', 'data_vg': 'ceph-5854fe4a-6d96-56a2-8017-73d7ac8736b8'})
2026-03-13 00:44:43.385492 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:43.385498 | orchestrator |
2026-03-13 00:44:43.385505 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-13 00:44:43.385512 | orchestrator | Friday 13 March 2026 00:44:41 +0000 (0:00:00.166) 0:01:06.511 **********
2026-03-13 00:44:43.385519 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-119e494c-61db-56d2-84c4-ae65d8356f6a', 'data_vg': 'ceph-119e494c-61db-56d2-84c4-ae65d8356f6a'})
2026-03-13 00:44:43.385527 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5854fe4a-6d96-56a2-8017-73d7ac8736b8', 'data_vg': 'ceph-5854fe4a-6d96-56a2-8017-73d7ac8736b8'})
2026-03-13 00:44:43.385534 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:44:43.385541 | orchestrator |
2026-03-13 00:44:43.385548 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-13 00:44:43.385554 | orchestrator | Friday 13 March 2026 00:44:41 +0000 (0:00:00.140) 0:01:06.651 **********
2026-03-13 00:44:43.385561 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:44:43.385568 | orchestrator |
2026-03-13 00:44:43.385575 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-13 00:44:43.385581 | orchestrator | Friday 13 March 2026 00:44:41 +0000 (0:00:00.599) 0:01:07.251 **********
2026-03-13 00:44:43.385588 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:44:43.385594 | orchestrator |
2026-03-13 00:44:43.385601 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-13 00:44:43.385611 | orchestrator | Friday 13 March 2026 00:44:42 +0000 (0:00:00.591) 0:01:07.843 **********
2026-03-13 00:44:43.385615 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:44:43.385618 | orchestrator |
2026-03-13 00:44:43.385622 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-13 00:44:43.385626 | orchestrator | Friday 13 March 2026 00:44:42 +0000 (0:00:00.141) 0:01:07.984 **********
2026-03-13 00:44:43.385630 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-119e494c-61db-56d2-84c4-ae65d8356f6a', 'vg_name': 'ceph-119e494c-61db-56d2-84c4-ae65d8356f6a'})
2026-03-13 00:44:43.385634 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-5854fe4a-6d96-56a2-8017-73d7ac8736b8', 'vg_name': 'ceph-5854fe4a-6d96-56a2-8017-73d7ac8736b8'})
2026-03-13 00:44:43.385638 | orchestrator |
2026-03-13 00:44:43.385642 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-13 00:44:43.385645 | orchestrator | Friday 13 March 2026 00:44:42 +0000 (0:00:00.185) 0:01:08.170 **********
2026-03-13 00:44:43.385660 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-119e494c-61db-56d2-84c4-ae65d8356f6a', 'data_vg': 'ceph-119e494c-61db-56d2-84c4-ae65d8356f6a'})
2026-03-13 00:44:43.385664 | orchestrator | skipping: [testbed-node-5] => (item={'data':
'osd-block-5854fe4a-6d96-56a2-8017-73d7ac8736b8', 'data_vg': 'ceph-5854fe4a-6d96-56a2-8017-73d7ac8736b8'})  2026-03-13 00:44:43.385668 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:44:43.385671 | orchestrator | 2026-03-13 00:44:43.385675 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-13 00:44:43.385679 | orchestrator | Friday 13 March 2026 00:44:42 +0000 (0:00:00.158) 0:01:08.328 ********** 2026-03-13 00:44:43.385683 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-119e494c-61db-56d2-84c4-ae65d8356f6a', 'data_vg': 'ceph-119e494c-61db-56d2-84c4-ae65d8356f6a'})  2026-03-13 00:44:43.385686 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5854fe4a-6d96-56a2-8017-73d7ac8736b8', 'data_vg': 'ceph-5854fe4a-6d96-56a2-8017-73d7ac8736b8'})  2026-03-13 00:44:43.385690 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:44:43.385694 | orchestrator | 2026-03-13 00:44:43.385698 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-13 00:44:43.385701 | orchestrator | Friday 13 March 2026 00:44:43 +0000 (0:00:00.164) 0:01:08.493 ********** 2026-03-13 00:44:43.385705 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-119e494c-61db-56d2-84c4-ae65d8356f6a', 'data_vg': 'ceph-119e494c-61db-56d2-84c4-ae65d8356f6a'})  2026-03-13 00:44:43.385712 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5854fe4a-6d96-56a2-8017-73d7ac8736b8', 'data_vg': 'ceph-5854fe4a-6d96-56a2-8017-73d7ac8736b8'})  2026-03-13 00:44:43.385716 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:44:43.385719 | orchestrator | 2026-03-13 00:44:43.385723 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-13 00:44:43.385727 | orchestrator | Friday 13 March 2026 00:44:43 +0000 (0:00:00.159) 0:01:08.653 ********** 2026-03-13 00:44:43.385731 | 
orchestrator | ok: [testbed-node-5] => { 2026-03-13 00:44:43.385734 | orchestrator |  "lvm_report": { 2026-03-13 00:44:43.385738 | orchestrator |  "lv": [ 2026-03-13 00:44:43.385742 | orchestrator |  { 2026-03-13 00:44:43.385746 | orchestrator |  "lv_name": "osd-block-119e494c-61db-56d2-84c4-ae65d8356f6a", 2026-03-13 00:44:43.385750 | orchestrator |  "vg_name": "ceph-119e494c-61db-56d2-84c4-ae65d8356f6a" 2026-03-13 00:44:43.385754 | orchestrator |  }, 2026-03-13 00:44:43.385757 | orchestrator |  { 2026-03-13 00:44:43.385761 | orchestrator |  "lv_name": "osd-block-5854fe4a-6d96-56a2-8017-73d7ac8736b8", 2026-03-13 00:44:43.385765 | orchestrator |  "vg_name": "ceph-5854fe4a-6d96-56a2-8017-73d7ac8736b8" 2026-03-13 00:44:43.385769 | orchestrator |  } 2026-03-13 00:44:43.385772 | orchestrator |  ], 2026-03-13 00:44:43.385776 | orchestrator |  "pv": [ 2026-03-13 00:44:43.385783 | orchestrator |  { 2026-03-13 00:44:43.385786 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-13 00:44:43.385790 | orchestrator |  "vg_name": "ceph-119e494c-61db-56d2-84c4-ae65d8356f6a" 2026-03-13 00:44:43.385794 | orchestrator |  }, 2026-03-13 00:44:43.385798 | orchestrator |  { 2026-03-13 00:44:43.385801 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-13 00:44:43.385805 | orchestrator |  "vg_name": "ceph-5854fe4a-6d96-56a2-8017-73d7ac8736b8" 2026-03-13 00:44:43.385809 | orchestrator |  } 2026-03-13 00:44:43.385813 | orchestrator |  ] 2026-03-13 00:44:43.385816 | orchestrator |  } 2026-03-13 00:44:43.385820 | orchestrator | } 2026-03-13 00:44:43.385824 | orchestrator | 2026-03-13 00:44:43.385828 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 00:44:43.385831 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-13 00:44:43.385835 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-13 00:44:43.385839 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-13 00:44:43.385843 | orchestrator | 2026-03-13 00:44:43.385847 | orchestrator | 2026-03-13 00:44:43.385850 | orchestrator | 2026-03-13 00:44:43.385854 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 00:44:43.385858 | orchestrator | Friday 13 March 2026 00:44:43 +0000 (0:00:00.145) 0:01:08.798 ********** 2026-03-13 00:44:43.385862 | orchestrator | =============================================================================== 2026-03-13 00:44:43.385866 | orchestrator | Create block VGs -------------------------------------------------------- 5.69s 2026-03-13 00:44:43.385869 | orchestrator | Create block LVs -------------------------------------------------------- 4.25s 2026-03-13 00:44:43.385873 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.79s 2026-03-13 00:44:43.385877 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.70s 2026-03-13 00:44:43.385881 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.66s 2026-03-13 00:44:43.385884 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.60s 2026-03-13 00:44:43.385888 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.55s 2026-03-13 00:44:43.385892 | orchestrator | Add known partitions to the list of available block devices ------------- 1.44s 2026-03-13 00:44:43.385898 | orchestrator | Add known links to the list of available block devices ------------------ 1.20s 2026-03-13 00:44:43.688211 | orchestrator | Add known partitions to the list of available block devices ------------- 0.95s 2026-03-13 00:44:43.688267 | orchestrator | Add known links to the list of available block devices ------------------ 0.83s 2026-03-13 00:44:43.688273 | 
orchestrator | Print LVM report data --------------------------------------------------- 0.79s 2026-03-13 00:44:43.688277 | orchestrator | Add known partitions to the list of available block devices ------------- 0.78s 2026-03-13 00:44:43.688281 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.71s 2026-03-13 00:44:43.688285 | orchestrator | Print 'Create WAL VGs' -------------------------------------------------- 0.69s 2026-03-13 00:44:43.688289 | orchestrator | Print number of OSDs wanted per DB+WAL VG ------------------------------- 0.65s 2026-03-13 00:44:43.688293 | orchestrator | Get initial list of available block devices ----------------------------- 0.63s 2026-03-13 00:44:43.688297 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.62s 2026-03-13 00:44:43.688300 | orchestrator | Prepare variables for OSD count check ----------------------------------- 0.62s 2026-03-13 00:44:43.688304 | orchestrator | Add known partitions to the list of available block devices ------------- 0.61s 2026-03-13 00:44:55.810458 | orchestrator | 2026-03-13 00:44:55 | INFO  | Prepare task for execution of facts. 2026-03-13 00:44:55.880577 | orchestrator | 2026-03-13 00:44:55 | INFO  | Task 413e33fc-eab6-4d64-8738-9a09d552c2f1 (facts) was prepared for execution. 2026-03-13 00:44:55.880668 | orchestrator | 2026-03-13 00:44:55 | INFO  | It takes a moment until task 413e33fc-eab6-4d64-8738-9a09d552c2f1 (facts) has been started and output is visible here. 
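The play above gathers `lvs`/`pvs` JSON reports, combines them into the `lvm_report` structure printed later, and fails if an LV listed in `lvm_volumes` is missing. A minimal sketch of that combine-and-verify step, using canned JSON in the shape `--reportformat json` produces (the exact field selection and variable names `_lvs_cmd_output`/`_pvs_cmd_output` mirror the task names but are assumptions about the playbook's internals):

```python
import json

# Canned output in the shape of `lvs --reportformat json -o lv_name,vg_name`
# and `pvs --reportformat json -o pv_name,vg_name`; values taken from the
# report printed in the log above.
_lvs_cmd_output = '''{"report": [{"lv": [
  {"lv_name": "osd-block-119e494c-61db-56d2-84c4-ae65d8356f6a",
   "vg_name": "ceph-119e494c-61db-56d2-84c4-ae65d8356f6a"},
  {"lv_name": "osd-block-5854fe4a-6d96-56a2-8017-73d7ac8736b8",
   "vg_name": "ceph-5854fe4a-6d96-56a2-8017-73d7ac8736b8"}]}]}'''
_pvs_cmd_output = '''{"report": [{"pv": [
  {"pv_name": "/dev/sdb", "vg_name": "ceph-119e494c-61db-56d2-84c4-ae65d8356f6a"},
  {"pv_name": "/dev/sdc", "vg_name": "ceph-5854fe4a-6d96-56a2-8017-73d7ac8736b8"}]}]}'''

# Combine both reports into the single lvm_report structure shown in the log.
lvm_report = {
    "lv": json.loads(_lvs_cmd_output)["report"][0]["lv"],
    "pv": json.loads(_pvs_cmd_output)["report"][0]["pv"],
}

# Build the set of existing VG/LV pairs and fail if a block LV defined in
# lvm_volumes is missing (the "Fail if block LV ... is missing" check).
existing = {(lv["vg_name"], lv["lv_name"]) for lv in lvm_report["lv"]}
lvm_volumes = [
    {"data": "osd-block-119e494c-61db-56d2-84c4-ae65d8356f6a",
     "data_vg": "ceph-119e494c-61db-56d2-84c4-ae65d8356f6a"},
]
missing = [v for v in lvm_volumes if (v["data_vg"], v["data"]) not in existing]
assert not missing, f"block LVs missing: {missing}"
```

In the real role the check is skipped when the LV was just created in the same run, which is why the log shows those tasks as skipping.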
2026-03-13 00:45:08.318893 | orchestrator | 2026-03-13 00:45:08.318954 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-13 00:45:08.318961 | orchestrator | 2026-03-13 00:45:08.318965 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-13 00:45:08.318968 | orchestrator | Friday 13 March 2026 00:44:59 +0000 (0:00:00.278) 0:00:00.278 ********** 2026-03-13 00:45:08.318972 | orchestrator | ok: [testbed-manager] 2026-03-13 00:45:08.318975 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:45:08.318978 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:45:08.318982 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:45:08.318985 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:45:08.318988 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:45:08.318991 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:45:08.318994 | orchestrator | 2026-03-13 00:45:08.318997 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-13 00:45:08.319001 | orchestrator | Friday 13 March 2026 00:45:01 +0000 (0:00:01.092) 0:00:01.371 ********** 2026-03-13 00:45:08.319004 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:45:08.319007 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:45:08.319010 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:45:08.319014 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:45:08.319017 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:45:08.319020 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:45:08.319023 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:45:08.319026 | orchestrator | 2026-03-13 00:45:08.319029 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-13 00:45:08.319032 | orchestrator | 2026-03-13 00:45:08.319035 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-13 00:45:08.319038 | orchestrator | Friday 13 March 2026 00:45:02 +0000 (0:00:01.280) 0:00:02.651 ********** 2026-03-13 00:45:08.319041 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:45:08.319044 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:45:08.319047 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:45:08.319050 | orchestrator | ok: [testbed-manager] 2026-03-13 00:45:08.319053 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:45:08.319056 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:45:08.319059 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:45:08.319062 | orchestrator | 2026-03-13 00:45:08.319066 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-13 00:45:08.319069 | orchestrator | 2026-03-13 00:45:08.319072 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-13 00:45:08.319075 | orchestrator | Friday 13 March 2026 00:45:07 +0000 (0:00:05.123) 0:00:07.775 ********** 2026-03-13 00:45:08.319078 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:45:08.319081 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:45:08.319084 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:45:08.319087 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:45:08.319090 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:45:08.319093 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:45:08.319096 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:45:08.319099 | orchestrator | 2026-03-13 00:45:08.319102 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 00:45:08.319105 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-13 00:45:08.319109 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-13 00:45:08.319125 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-13 00:45:08.319128 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-13 00:45:08.319131 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-13 00:45:08.319135 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-13 00:45:08.319138 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-13 00:45:08.319141 | orchestrator | 2026-03-13 00:45:08.319144 | orchestrator | 2026-03-13 00:45:08.319147 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 00:45:08.319150 | orchestrator | Friday 13 March 2026 00:45:07 +0000 (0:00:00.505) 0:00:08.280 ********** 2026-03-13 00:45:08.319153 | orchestrator | =============================================================================== 2026-03-13 00:45:08.319156 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.12s 2026-03-13 00:45:08.319159 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.28s 2026-03-13 00:45:08.319162 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.09s 2026-03-13 00:45:08.319165 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s 2026-03-13 00:45:20.844407 | orchestrator | 2026-03-13 00:45:20 | INFO  | Prepare task for execution of frr. 2026-03-13 00:45:20.910074 | orchestrator | 2026-03-13 00:45:20 | INFO  | Task 0e916f9f-6d8e-4e54-8ef0-d97e76a40c3c (frr) was prepared for execution. 
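Each collection task here follows the same pattern: it is "prepared for execution", started in the background, then polled until it leaves the STARTED state ("Wait 1 second(s) until the next check"). A hypothetical sketch of that wait loop, with `get_state()` fed from canned data standing in for the OSISM manager's real task-state backend (the shortened task ID and state sequence are illustrative only):

```python
import time

# Canned state sequence standing in for a real task-state query backend.
_canned = {"0e916f9f": ["PENDING", "STARTED", "STARTED", "SUCCESS"]}

def get_state(task_id):
    # Return the next state for the task; the final state repeats forever.
    states = _canned[task_id]
    return states.pop(0) if len(states) > 1 else states[0]

def wait_for_tasks(task_ids, interval=1.0, sleep=time.sleep):
    """Poll each task until it reaches a terminal state; return the final states."""
    final = {}
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"INFO | Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                final[task_id] = state
                pending.discard(task_id)
        if pending:
            print(f"INFO | Wait {int(interval)} second(s) until the next check")
            sleep(interval)
    return final

result = wait_for_tasks(["0e916f9f"], sleep=lambda s: None)  # no real sleeping here
```

Passing `sleep=lambda s: None` keeps the sketch instant; the deployment uses a fixed real delay between checks, as the log's one-second waits show.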
2026-03-13 00:45:20.910122 | orchestrator | 2026-03-13 00:45:20 | INFO  | It takes a moment until task 0e916f9f-6d8e-4e54-8ef0-d97e76a40c3c (frr) has been started and output is visible here. 2026-03-13 00:45:44.432175 | orchestrator | 2026-03-13 00:45:44.432294 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-13 00:45:44.432304 | orchestrator | 2026-03-13 00:45:44.432309 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-13 00:45:44.432314 | orchestrator | Friday 13 March 2026 00:45:24 +0000 (0:00:00.217) 0:00:00.217 ********** 2026-03-13 00:45:44.432319 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-13 00:45:44.432324 | orchestrator | 2026-03-13 00:45:44.432329 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-13 00:45:44.432333 | orchestrator | Friday 13 March 2026 00:45:24 +0000 (0:00:00.204) 0:00:00.421 ********** 2026-03-13 00:45:44.432338 | orchestrator | changed: [testbed-manager] 2026-03-13 00:45:44.432343 | orchestrator | 2026-03-13 00:45:44.432347 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-13 00:45:44.432352 | orchestrator | Friday 13 March 2026 00:45:25 +0000 (0:00:01.026) 0:00:01.448 ********** 2026-03-13 00:45:44.432356 | orchestrator | changed: [testbed-manager] 2026-03-13 00:45:44.432360 | orchestrator | 2026-03-13 00:45:44.432365 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-13 00:45:44.432369 | orchestrator | Friday 13 March 2026 00:45:34 +0000 (0:00:08.445) 0:00:09.893 ********** 2026-03-13 00:45:44.432373 | orchestrator | ok: [testbed-manager] 2026-03-13 00:45:44.432378 | orchestrator | 2026-03-13 00:45:44.432383 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-13 00:45:44.432387 | orchestrator | Friday 13 March 2026 00:45:35 +0000 (0:00:00.962) 0:00:10.855 ********** 2026-03-13 00:45:44.432391 | orchestrator | changed: [testbed-manager] 2026-03-13 00:45:44.432411 | orchestrator | 2026-03-13 00:45:44.432416 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-13 00:45:44.432420 | orchestrator | Friday 13 March 2026 00:45:35 +0000 (0:00:00.956) 0:00:11.812 ********** 2026-03-13 00:45:44.432425 | orchestrator | ok: [testbed-manager] 2026-03-13 00:45:44.432429 | orchestrator | 2026-03-13 00:45:44.432434 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-03-13 00:45:44.432438 | orchestrator | Friday 13 March 2026 00:45:37 +0000 (0:00:01.037) 0:00:12.850 ********** 2026-03-13 00:45:44.432442 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:45:44.432447 | orchestrator | 2026-03-13 00:45:44.432451 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-03-13 00:45:44.432455 | orchestrator | Friday 13 March 2026 00:45:37 +0000 (0:00:00.146) 0:00:12.997 ********** 2026-03-13 00:45:44.432459 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:45:44.432464 | orchestrator | 2026-03-13 00:45:44.432468 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-03-13 00:45:44.432472 | orchestrator | Friday 13 March 2026 00:45:37 +0000 (0:00:00.143) 0:00:13.141 ********** 2026-03-13 00:45:44.432477 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:45:44.432481 | orchestrator | 2026-03-13 00:45:44.432485 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-13 00:45:44.432490 | orchestrator | Friday 13 March 2026 00:45:37 +0000 (0:00:00.148) 0:00:13.290 ********** 2026-03-13 
00:45:44.432494 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:45:44.432499 | orchestrator | 2026-03-13 00:45:44.432503 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-13 00:45:44.432507 | orchestrator | Friday 13 March 2026 00:45:37 +0000 (0:00:00.141) 0:00:13.431 ********** 2026-03-13 00:45:44.432512 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:45:44.432516 | orchestrator | 2026-03-13 00:45:44.432520 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-13 00:45:44.432524 | orchestrator | Friday 13 March 2026 00:45:37 +0000 (0:00:00.147) 0:00:13.579 ********** 2026-03-13 00:45:44.432529 | orchestrator | changed: [testbed-manager] 2026-03-13 00:45:44.432533 | orchestrator | 2026-03-13 00:45:44.432537 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-03-13 00:45:44.432542 | orchestrator | Friday 13 March 2026 00:45:38 +0000 (0:00:01.017) 0:00:14.596 ********** 2026-03-13 00:45:44.432546 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-13 00:45:44.432550 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-13 00:45:44.432555 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-13 00:45:44.432560 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-13 00:45:44.432564 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-13 00:45:44.432568 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-13 00:45:44.432573 | orchestrator | 2026-03-13 00:45:44.432577 | orchestrator | TASK 
[osism.services.frr : Manage frr service] ********************************* 2026-03-13 00:45:44.432581 | orchestrator | Friday 13 March 2026 00:45:41 +0000 (0:00:03.057) 0:00:17.654 ********** 2026-03-13 00:45:44.432586 | orchestrator | ok: [testbed-manager] 2026-03-13 00:45:44.432590 | orchestrator | 2026-03-13 00:45:44.432594 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-03-13 00:45:44.432598 | orchestrator | Friday 13 March 2026 00:45:42 +0000 (0:00:01.141) 0:00:18.796 ********** 2026-03-13 00:45:44.432603 | orchestrator | changed: [testbed-manager] 2026-03-13 00:45:44.432607 | orchestrator | 2026-03-13 00:45:44.432611 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 00:45:44.432620 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-13 00:45:44.432624 | orchestrator | 2026-03-13 00:45:44.432628 | orchestrator | 2026-03-13 00:45:44.432643 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 00:45:44.432648 | orchestrator | Friday 13 March 2026 00:45:44 +0000 (0:00:01.279) 0:00:20.076 ********** 2026-03-13 00:45:44.432652 | orchestrator | =============================================================================== 2026-03-13 00:45:44.432657 | orchestrator | osism.services.frr : Install frr package -------------------------------- 8.45s 2026-03-13 00:45:44.432661 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.06s 2026-03-13 00:45:44.432666 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.28s 2026-03-13 00:45:44.432670 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.14s 2026-03-13 00:45:44.432675 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.04s 
2026-03-13 00:45:44.432679 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.03s 2026-03-13 00:45:44.432683 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.02s 2026-03-13 00:45:44.432688 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.96s 2026-03-13 00:45:44.432692 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.96s 2026-03-13 00:45:44.432696 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.20s 2026-03-13 00:45:44.432700 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.15s 2026-03-13 00:45:44.432705 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.15s 2026-03-13 00:45:44.432709 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.15s 2026-03-13 00:45:44.432713 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.14s 2026-03-13 00:45:44.432718 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.14s 2026-03-13 00:45:44.623367 | orchestrator | 2026-03-13 00:45:44.626542 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Fri Mar 13 00:45:44 UTC 2026 2026-03-13 00:45:44.626604 | orchestrator | 2026-03-13 00:45:46.392188 | orchestrator | 2026-03-13 00:45:46 | INFO  | Collection nutshell is prepared for execution 2026-03-13 00:45:46.392325 | orchestrator | 2026-03-13 00:45:46 | INFO  | A [0] - dotfiles 2026-03-13 00:45:56.410760 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [0] - homer 2026-03-13 00:45:56.410838 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [0] - netdata 2026-03-13 00:45:56.410845 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [0] - openstackclient 2026-03-13 00:45:56.410988 | orchestrator | 2026-03-13 
00:45:56 | INFO  | A [0] - phpmyadmin 2026-03-13 00:45:56.411371 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [0] - common 2026-03-13 00:45:56.416096 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [1] -- loadbalancer 2026-03-13 00:45:56.416540 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [2] --- opensearch 2026-03-13 00:45:56.416568 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [2] --- mariadb-ng 2026-03-13 00:45:56.416576 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [3] ---- horizon 2026-03-13 00:45:56.416583 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [3] ---- keystone 2026-03-13 00:45:56.417169 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [4] ----- neutron 2026-03-13 00:45:56.417385 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [5] ------ wait-for-nova 2026-03-13 00:45:56.417672 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [6] ------- octavia 2026-03-13 00:45:56.420447 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [4] ----- barbican 2026-03-13 00:45:56.420515 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [4] ----- designate 2026-03-13 00:45:56.420523 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [4] ----- ironic 2026-03-13 00:45:56.420529 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [4] ----- placement 2026-03-13 00:45:56.420535 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [4] ----- magnum 2026-03-13 00:45:56.421432 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [1] -- openvswitch 2026-03-13 00:45:56.421460 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [2] --- ovn 2026-03-13 00:45:56.421807 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [1] -- memcached 2026-03-13 00:45:56.421965 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [1] -- redis 2026-03-13 00:45:56.422847 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [1] -- rabbitmq-ng 2026-03-13 00:45:56.422921 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [0] - kubernetes 2026-03-13 00:45:56.426277 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [1] -- 
kubeconfig 2026-03-13 00:45:56.426326 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [1] -- copy-kubeconfig 2026-03-13 00:45:56.426338 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [0] - ceph 2026-03-13 00:45:56.429045 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [1] -- ceph-pools 2026-03-13 00:45:56.429086 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [2] --- copy-ceph-keys 2026-03-13 00:45:56.429094 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [3] ---- cephclient 2026-03-13 00:45:56.430352 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-03-13 00:45:56.430388 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [4] ----- wait-for-keystone 2026-03-13 00:45:56.430395 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [5] ------ kolla-ceph-rgw 2026-03-13 00:45:56.430402 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [5] ------ glance 2026-03-13 00:45:56.430408 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [5] ------ cinder 2026-03-13 00:45:56.430415 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [5] ------ nova 2026-03-13 00:45:56.430559 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [4] ----- prometheus 2026-03-13 00:45:56.430568 | orchestrator | 2026-03-13 00:45:56 | INFO  | A [5] ------ grafana 2026-03-13 00:45:56.613479 | orchestrator | 2026-03-13 00:45:56 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-03-13 00:45:56.613598 | orchestrator | 2026-03-13 00:45:56 | INFO  | Tasks are running in the background 2026-03-13 00:45:59.525853 | orchestrator | 2026-03-13 00:45:59 | INFO  | No task IDs specified, wait for all currently running tasks 2026-03-13 00:46:01.638623 | orchestrator | 2026-03-13 00:46:01 | INFO  | Task b42c32fc-91a3-4dbd-b76a-1be951fea01c is in state STARTED 2026-03-13 00:46:01.638703 | orchestrator | 2026-03-13 00:46:01 | INFO  | Task 4f924bdd-5277-4480-8a62-59bf64f9ffda is in state STARTED 2026-03-13 00:46:01.639281 | orchestrator | 2026-03-13 00:46:01 | INFO 
 | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED
2026-03-13 00:46:01.640450 | orchestrator | 2026-03-13 00:46:01 | INFO  | Task 22cdd9aa-30ec-436c-b7e4-e90ea5cb4abf is in state STARTED
2026-03-13 00:46:01.640915 | orchestrator | 2026-03-13 00:46:01 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED
2026-03-13 00:46:01.643514 | orchestrator | 2026-03-13 00:46:01 | INFO  | Task 0803de47-81c6-42cb-b3be-820e1e1c7d1c is in state STARTED
2026-03-13 00:46:01.643897 | orchestrator | 2026-03-13 00:46:01 | INFO  | Task 014d7711-ff89-4575-988f-9cb9b8d866dd is in state STARTED
2026-03-13 00:46:01.643956 | orchestrator | 2026-03-13 00:46:01 | INFO  | Wait 1 second(s) until the next check
[... identical polling cycles for the same seven tasks (b42c32fc, 4f924bdd, 4e6a7a44, 22cdd9aa, 12c0419a, 0803de47, 014d7711), all in state STARTED, repeat at 00:46:04, 00:46:07, 00:46:10, 00:46:14, 00:46:17 and 00:46:20 ...]
2026-03-13 00:46:23.455370 | orchestrator | 2026-03-13 00:46:23 | INFO  | Task b42c32fc-91a3-4dbd-b76a-1be951fea01c is in state STARTED
2026-03-13 00:46:23.474008 | orchestrator | 2026-03-13 00:46:23 | INFO  | Task ac683604-503a-485f-b693-a2d1141b1536 is in state STARTED
2026-03-13 00:46:23.477564 | orchestrator | 2026-03-13 00:46:23 | INFO  | Task 4f924bdd-5277-4480-8a62-59bf64f9ffda is in state STARTED
2026-03-13 00:46:23.480863 | orchestrator | 2026-03-13 00:46:23 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED
2026-03-13 00:46:23.485835 | orchestrator | 2026-03-13 00:46:23 | INFO  | Task 22cdd9aa-30ec-436c-b7e4-e90ea5cb4abf is in state STARTED
2026-03-13 00:46:23.493460 | orchestrator | 2026-03-13 00:46:23 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED
2026-03-13 00:46:23.498954 | orchestrator | 2026-03-13
00:46:23 | INFO  | Task 0803de47-81c6-42cb-b3be-820e1e1c7d1c is in state STARTED
2026-03-13 00:46:23.499174 | orchestrator | 2026-03-13 00:46:23 | INFO  | Task 014d7711-ff89-4575-988f-9cb9b8d866dd is in state SUCCESS
2026-03-13 00:46:23.499856 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-03-13 00:46:23.499875 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2026-03-13 00:46:23.499883 | orchestrator | Friday 13 March 2026 00:46:07 +0000 (0:00:00.528) 0:00:00.528 **********
2026-03-13 00:46:23.499890 | orchestrator | changed: [testbed-manager]
2026-03-13 00:46:23.499898 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:46:23.499905 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:46:23.499911 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:46:23.499918 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:46:23.499925 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:46:23.499932 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:46:23.499947 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-03-13 00:46:23.499954 | orchestrator | Friday 13 March 2026 00:46:12 +0000 (0:00:04.525) 0:00:05.054 **********
2026-03-13 00:46:23.499961 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-13 00:46:23.499969 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-13 00:46:23.499976 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-13 00:46:23.499983 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-13 00:46:23.499991 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-13 00:46:23.499998 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-13 00:46:23.500004 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-13 00:46:23.500016 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2026-03-13 00:46:23.500022 | orchestrator | Friday 13 March 2026 00:46:14 +0000 (0:00:01.983) 0:00:07.038 **********
2026-03-13 00:46:23.500033 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-13 00:46:13.645980', 'end': '2026-03-13 00:46:13.655501', 'delta': '0:00:00.009521', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
[... equivalent loop results for testbed-node-2, testbed-node-1, testbed-node-0, testbed-manager, testbed-node-4 and testbed-node-5 elided; each reports rc=2 with "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory" and failed_when_result: False ...]
2026-03-13 00:46:23.500157 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-03-13 00:46:23.500170 | orchestrator | Friday 13 March 2026 00:46:16 +0000 (0:00:01.985) 0:00:09.023 **********
2026-03-13 00:46:23.500177 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-13 00:46:23.500184 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-13 00:46:23.500191 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-13 00:46:23.500199 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-13 00:46:23.500206 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-13 00:46:23.500213 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-13 00:46:23.500237 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-13 00:46:23.500249 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-03-13 00:46:23.500255 | orchestrator | Friday 13 March 2026 00:46:17 +0000 (0:00:01.690) 0:00:10.714 **********
2026-03-13 00:46:23.500262 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-03-13 00:46:23.500269 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-03-13 00:46:23.500276 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-03-13 00:46:23.500283 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-03-13 00:46:23.500289 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-03-13 00:46:23.500296 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-03-13 00:46:23.500303 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-03-13 00:46:23.500317 | orchestrator | PLAY RECAP *********************************************************************
2026-03-13 00:46:23.500331 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 00:46:23.500339 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 00:46:23.500347 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 00:46:23.500354 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 00:46:23.500361 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 00:46:23.500368 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 00:46:23.500375 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 00:46:23.500395 | orchestrator | TASKS RECAP ********************************************************************
2026-03-13 00:46:23.500402 | orchestrator | Friday 13 March 2026 00:46:20 +0000 (0:00:02.547) 0:00:13.262 **********
2026-03-13 00:46:23.500409 | orchestrator | ===============================================================================
2026-03-13 00:46:23.500417 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.53s
2026-03-13 00:46:23.500425 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.55s
2026-03-13 00:46:23.500434 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.99s
2026-03-13 00:46:23.500645 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.98s
2026-03-13 00:46:23.500659 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.
---- 1.69s
2026-03-13 00:46:23.500667 | orchestrator | 2026-03-13 00:46:23 | INFO  | Wait 1 second(s) until the next check
2026-03-13 00:46:26.817103 | orchestrator | 2026-03-13 00:46:26 | INFO  | Task b42c32fc-91a3-4dbd-b76a-1be951fea01c is in state STARTED
2026-03-13 00:46:26.817209 | orchestrator | 2026-03-13 00:46:26 | INFO  | Task ac683604-503a-485f-b693-a2d1141b1536 is in state STARTED
2026-03-13 00:46:26.817264 | orchestrator | 2026-03-13 00:46:26 | INFO  | Task 4f924bdd-5277-4480-8a62-59bf64f9ffda is in state STARTED
2026-03-13 00:46:26.817271 | orchestrator | 2026-03-13 00:46:26 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED
2026-03-13 00:46:26.817277 | orchestrator | 2026-03-13 00:46:26 | INFO  | Task 22cdd9aa-30ec-436c-b7e4-e90ea5cb4abf is in state STARTED
2026-03-13 00:46:26.817296 | orchestrator | 2026-03-13 00:46:26 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED
2026-03-13 00:46:26.817300 | orchestrator | 2026-03-13 00:46:26 | INFO  | Task 0803de47-81c6-42cb-b3be-820e1e1c7d1c is in state STARTED
2026-03-13 00:46:26.817304 | orchestrator | 2026-03-13 00:46:26 | INFO  | Wait 1 second(s) until the next check
[... identical polling cycles for the same seven tasks repeat at 00:46:29 and 00:46:32 ...]
2026-03-13 00:46:36.086541 | orchestrator | 2026-03-13 00:46:36 | INFO  | Task b42c32fc-91a3-4dbd-b76a-1be951fea01c is in state STARTED
2026-03-13 00:46:36.086634 | orchestrator | 2026-03-13 00:46:36 | INFO  | Task ac683604-503a-485f-b693-a2d1141b1536 is in state STARTED
2026-03-13 00:46:36.086644 | orchestrator | 2026-03-13 00:46:36 | INFO  | Task 4f924bdd-5277-4480-8a62-59bf64f9ffda is in state STARTED
2026-03-13 00:46:36.086652 | orchestrator | 2026-03-13 00:46:36 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED
2026-03-13 00:46:36.086660 | orchestrator | 2026-03-13 00:46:36 | INFO  | Task 22cdd9aa-30ec-436c-b7e4-e90ea5cb4abf is in state STARTED
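The repeating `Task … is in state …` / `Wait 1 second(s) until the next check` lines above come from a simple wait loop: poll every task's state each round, drop the ones that have reached a terminal state, and sleep before the next round. A minimal sketch of that pattern, assuming a hypothetical `get_state(task_id)` lookup (this is an illustration, not the actual osism client API):

```python
import time


def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=3600.0):
    """Poll each task until all reach a terminal state, logging as it goes.

    get_state is a hypothetical callable mapping a task id to a state
    string such as "STARTED" or "SUCCESS" -- a stand-in for whatever
    task API the deploy wrapper actually queries.
    """
    terminal = {"SUCCESS", "FAILURE"}
    pending = set(task_ids)
    states = {}
    deadline = time.monotonic() + timeout
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            states[task_id] = get_state(task_id)
            print(f"Task {task_id} is in state {states[task_id]}")
        # Stop tracking tasks that have finished.
        pending -= {t for t, s in states.items() if s in terminal}
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return states
```

One design consequence visible in the log: because every pending task is re-polled each cycle, a finished task (e.g. `014d7711` going to `SUCCESS`) is reported once and then simply disappears from subsequent cycles.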
2026-03-13 00:46:36.086693 | orchestrator | 2026-03-13 00:46:36 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED
2026-03-13 00:46:36.086700 | orchestrator | 2026-03-13 00:46:36 | INFO  | Task 0803de47-81c6-42cb-b3be-820e1e1c7d1c is in state STARTED
2026-03-13 00:46:36.086707 | orchestrator | 2026-03-13 00:46:36 | INFO  | Wait 1 second(s) until the next check
[... identical polling cycles for the same seven tasks repeat at 00:46:39 and 00:46:42 ...]
2026-03-13 00:46:45.334525 | orchestrator | 2026-03-13 00:46:45 | INFO  | Task b42c32fc-91a3-4dbd-b76a-1be951fea01c is in state STARTED
2026-03-13 00:46:45.334629 | orchestrator | 2026-03-13 00:46:45 | INFO  | Task ac683604-503a-485f-b693-a2d1141b1536 is in state STARTED
2026-03-13 00:46:45.334640 | orchestrator | 2026-03-13 00:46:45 | INFO  | Task 4f924bdd-5277-4480-8a62-59bf64f9ffda is in state STARTED
2026-03-13 00:46:45.334647 | orchestrator | 2026-03-13 00:46:45 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED
2026-03-13 00:46:45.334654 | orchestrator | 2026-03-13 00:46:45 | INFO  | Task 22cdd9aa-30ec-436c-b7e4-e90ea5cb4abf is in state STARTED
2026-03-13 00:46:45.334660 | orchestrator | 2026-03-13 00:46:45 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED
2026-03-13 00:46:45.334667 | orchestrator | 2026-03-13 00:46:45 | INFO  | Task 0803de47-81c6-42cb-b3be-820e1e1c7d1c is in state SUCCESS
2026-03-13 00:46:45.334674 | orchestrator | 2026-03-13 00:46:45 | INFO  | Wait 1 second(s) until the next check
[... the six remaining tasks poll unchanged (all STARTED) at 00:46:48, 00:46:51 and 00:46:54 ...]
2026-03-13 00:46:57.739485 | orchestrator | 2026-03-13 00:46:57 | INFO  | Task b42c32fc-91a3-4dbd-b76a-1be951fea01c is in state STARTED
2026-03-13 00:46:57.739580 | orchestrator | 2026-03-13 00:46:57 | INFO  | Task ac683604-503a-485f-b693-a2d1141b1536 is in state STARTED
2026-03-13 00:46:57.740880 | orchestrator | 2026-03-13 00:46:57 | INFO  | Task 4f924bdd-5277-4480-8a62-59bf64f9ffda is in state SUCCESS
2026-03-13 00:46:57.747117 | orchestrator | 2026-03-13 00:46:57 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED
2026-03-13 00:46:57.750999 | orchestrator | 2026-03-13 00:46:57 | INFO  | Task 22cdd9aa-30ec-436c-b7e4-e90ea5cb4abf is in state STARTED
2026-03-13 00:46:57.752842 | orchestrator | 2026-03-13 00:46:57 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED
2026-03-13 00:46:57.752968 | orchestrator | 2026-03-13 00:46:57 | INFO  | Wait 1 second(s) until the next check
[... the five remaining tasks (b42c32fc, ac683604, 4e6a7a44, 22cdd9aa, 12c0419a) poll unchanged (all STARTED) at 00:47:00, 00:47:03, 00:47:06, 00:47:10, 00:47:13, 00:47:16, 00:47:19 and 00:47:22 ...]
2026-03-13 00:47:25.348199 | orchestrator | 2026-03-13 00:47:25 | INFO  | Task b42c32fc-91a3-4dbd-b76a-1be951fea01c is in state STARTED
2026-03-13 00:47:25.348817 | orchestrator | 2026-03-13 00:47:25 | INFO  | Task ac683604-503a-485f-b693-a2d1141b1536 is in state STARTED
2026-03-13 00:47:25.352610 | orchestrator | 2026-03-13
00:47:25 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:47:25.353707 | orchestrator | 2026-03-13 00:47:25 | INFO  | Task 22cdd9aa-30ec-436c-b7e4-e90ea5cb4abf is in state STARTED 2026-03-13 00:47:25.355797 | orchestrator | 2026-03-13 00:47:25 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED 2026-03-13 00:47:25.355851 | orchestrator | 2026-03-13 00:47:25 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:47:28.402996 | orchestrator | 2026-03-13 00:47:28.403079 | orchestrator | 2026-03-13 00:47:28.403089 | orchestrator | PLAY [Apply role homer] ******************************************************** 2026-03-13 00:47:28.403096 | orchestrator | 2026-03-13 00:47:28.403103 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2026-03-13 00:47:28.403111 | orchestrator | Friday 13 March 2026 00:46:08 +0000 (0:00:01.349) 0:00:01.349 ********** 2026-03-13 00:47:28.403117 | orchestrator | ok: [testbed-manager] => { 2026-03-13 00:47:28.403145 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2026-03-13 00:47:28.403154 | orchestrator | }
2026-03-13 00:47:28.403160 | orchestrator | 
2026-03-13 00:47:28.403166 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-03-13 00:47:28.403172 | orchestrator | Friday 13 March 2026 00:46:09 +0000 (0:00:00.418) 0:00:01.768 **********
2026-03-13 00:47:28.403177 | orchestrator | ok: [testbed-manager]
2026-03-13 00:47:28.403185 | orchestrator | 
2026-03-13 00:47:28.403191 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-03-13 00:47:28.403215 | orchestrator | Friday 13 March 2026 00:46:10 +0000 (0:00:01.490) 0:00:03.259 **********
2026-03-13 00:47:28.403221 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-03-13 00:47:28.403228 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-03-13 00:47:28.403234 | orchestrator | 
2026-03-13 00:47:28.403299 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-03-13 00:47:28.403307 | orchestrator | Friday 13 March 2026 00:46:11 +0000 (0:00:00.918) 0:00:04.177 **********
2026-03-13 00:47:28.403313 | orchestrator | changed: [testbed-manager]
2026-03-13 00:47:28.403319 | orchestrator | 
2026-03-13 00:47:28.403325 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-03-13 00:47:28.403331 | orchestrator | Friday 13 March 2026 00:46:13 +0000 (0:00:01.902) 0:00:06.080 **********
2026-03-13 00:47:28.403337 | orchestrator | changed: [testbed-manager]
2026-03-13 00:47:28.403343 | orchestrator | 
2026-03-13 00:47:28.403350 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-03-13 00:47:28.403356 | orchestrator | Friday 13 March 2026 00:46:15 +0000 (0:00:02.068) 0:00:08.148 **********
2026-03-13 00:47:28.403363 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-03-13 00:47:28.403369 | orchestrator | ok: [testbed-manager]
2026-03-13 00:47:28.403375 | orchestrator | 
2026-03-13 00:47:28.403382 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-03-13 00:47:28.403388 | orchestrator | Friday 13 March 2026 00:46:40 +0000 (0:00:25.060) 0:00:33.208 **********
2026-03-13 00:47:28.403394 | orchestrator | changed: [testbed-manager]
2026-03-13 00:47:28.403401 | orchestrator | 
2026-03-13 00:47:28.403407 | orchestrator | PLAY RECAP *********************************************************************
2026-03-13 00:47:28.403414 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 00:47:28.403422 | orchestrator | 
2026-03-13 00:47:28.403428 | orchestrator | 
2026-03-13 00:47:28.403435 | orchestrator | TASKS RECAP ********************************************************************
2026-03-13 00:47:28.403441 | orchestrator | Friday 13 March 2026 00:46:44 +0000 (0:00:03.359) 0:00:36.567 **********
2026-03-13 00:47:28.403447 | orchestrator | ===============================================================================
2026-03-13 00:47:28.403453 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.06s
2026-03-13 00:47:28.403460 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.36s
2026-03-13 00:47:28.403466 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.07s
2026-03-13 00:47:28.403473 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 1.90s
2026-03-13 00:47:28.403479 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.49s
2026-03-13 00:47:28.403486 | orchestrator | osism.services.homer : Create required directories ---------------------- 0.92s
2026-03-13 00:47:28.403493 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.42s
2026-03-13 00:47:28.403499 | orchestrator | 
2026-03-13 00:47:28.403506 | orchestrator | 
2026-03-13 00:47:28.403513 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-03-13 00:47:28.403519 | orchestrator | 
2026-03-13 00:47:28.403526 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-03-13 00:47:28.403538 | orchestrator | Friday 13 March 2026 00:46:09 +0000 (0:00:00.843) 0:00:00.843 **********
2026-03-13 00:47:28.403544 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-03-13 00:47:28.403552 | orchestrator | 
2026-03-13 00:47:28.403558 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-03-13 00:47:28.403564 | orchestrator | Friday 13 March 2026 00:46:09 +0000 (0:00:00.786) 0:00:01.630 **********
2026-03-13 00:47:28.403570 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-03-13 00:47:28.403576 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-03-13 00:47:28.403583 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-03-13 00:47:28.403600 | orchestrator | 
2026-03-13 00:47:28.403607 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-03-13 00:47:28.403620 | orchestrator | Friday 13 March 2026 00:46:11 +0000 (0:00:01.655) 0:00:03.285 **********
2026-03-13 00:47:28.403626 | orchestrator | changed: [testbed-manager]
2026-03-13 00:47:28.403632 | orchestrator | 
2026-03-13 00:47:28.403638 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-03-13 00:47:28.403644 | orchestrator | Friday 13 March 2026 00:46:13 +0000 (0:00:02.087) 0:00:05.373 **********
2026-03-13 00:47:28.403666 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-03-13 00:47:28.403673 | orchestrator | ok: [testbed-manager]
2026-03-13 00:47:28.403678 | orchestrator | 
2026-03-13 00:47:28.403684 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-03-13 00:47:28.403689 | orchestrator | Friday 13 March 2026 00:46:47 +0000 (0:00:34.111) 0:00:39.484 **********
2026-03-13 00:47:28.403694 | orchestrator | changed: [testbed-manager]
2026-03-13 00:47:28.403700 | orchestrator | 
2026-03-13 00:47:28.403705 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-03-13 00:47:28.403711 | orchestrator | Friday 13 March 2026 00:46:49 +0000 (0:00:01.766) 0:00:41.251 **********
2026-03-13 00:47:28.403717 | orchestrator | ok: [testbed-manager]
2026-03-13 00:47:28.403723 | orchestrator | 
2026-03-13 00:47:28.403729 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-03-13 00:47:28.403735 | orchestrator | Friday 13 March 2026 00:46:50 +0000 (0:00:00.743) 0:00:41.994 **********
2026-03-13 00:47:28.403741 | orchestrator | changed: [testbed-manager]
2026-03-13 00:47:28.403748 | orchestrator | 
2026-03-13 00:47:28.403753 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-03-13 00:47:28.403761 | orchestrator | Friday 13 March 2026 00:46:52 +0000 (0:00:02.424) 0:00:44.419 **********
2026-03-13 00:47:28.403767 | orchestrator | changed: [testbed-manager]
2026-03-13 00:47:28.403772 | orchestrator | 
2026-03-13 00:47:28.403779 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-03-13 00:47:28.403786 | orchestrator | Friday 13 March 2026 00:46:53 +0000 (0:00:00.839) 0:00:45.260 **********
2026-03-13 00:47:28.403793 | orchestrator | changed: [testbed-manager]
2026-03-13 00:47:28.403798 | orchestrator | 
2026-03-13 00:47:28.403804 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-03-13 00:47:28.403809 | orchestrator | Friday 13 March 2026 00:46:54 +0000 (0:00:01.267) 0:00:46.527 **********
2026-03-13 00:47:28.403815 | orchestrator | ok: [testbed-manager]
2026-03-13 00:47:28.403822 | orchestrator | 
2026-03-13 00:47:28.403828 | orchestrator | PLAY RECAP *********************************************************************
2026-03-13 00:47:28.403835 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 00:47:28.403842 | orchestrator | 
2026-03-13 00:47:28.403848 | orchestrator | 
2026-03-13 00:47:28.403855 | orchestrator | TASKS RECAP ********************************************************************
2026-03-13 00:47:28.403866 | orchestrator | Friday 13 March 2026 00:46:55 +0000 (0:00:00.594) 0:00:47.122 **********
2026-03-13 00:47:28.403871 | orchestrator | ===============================================================================
2026-03-13 00:47:28.403878 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 34.11s
2026-03-13 00:47:28.403884 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.42s
2026-03-13 00:47:28.403890 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.09s
2026-03-13 00:47:28.403897 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.77s
2026-03-13 00:47:28.403903 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.66s
2026-03-13 00:47:28.403909 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.27s
2026-03-13 00:47:28.403915 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.84s
2026-03-13 00:47:28.403920 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.79s
2026-03-13 00:47:28.403927 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.74s
2026-03-13 00:47:28.403934 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.59s
2026-03-13 00:47:28.403939 | orchestrator | 
2026-03-13 00:47:28.403946 | orchestrator | 
2026-03-13 00:47:28.403952 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-03-13 00:47:28.403959 | orchestrator | 
2026-03-13 00:47:28.403965 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-03-13 00:47:28.403972 | orchestrator | Friday 13 March 2026 00:46:24 +0000 (0:00:00.257) 0:00:00.257 **********
2026-03-13 00:47:28.403979 | orchestrator | ok: [testbed-manager]
2026-03-13 00:47:28.403986 | orchestrator | 
2026-03-13 00:47:28.403992 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-03-13 00:47:28.403999 | orchestrator | Friday 13 March 2026 00:46:26 +0000 (0:00:01.640) 0:00:01.897 **********
2026-03-13 00:47:28.404044 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-03-13 00:47:28.404051 | orchestrator | 
2026-03-13 00:47:28.404057 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-03-13 00:47:28.404063 | orchestrator | Friday 13 March 2026 00:46:27 +0000 (0:00:00.798) 0:00:02.696 **********
2026-03-13 00:47:28.404070 | orchestrator | changed: [testbed-manager]
2026-03-13 00:47:28.404076 | orchestrator | 
2026-03-13 00:47:28.404083 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-03-13 00:47:28.404089 | orchestrator | Friday 13 March 2026 00:46:28 +0000 (0:00:01.498) 0:00:04.194 **********
2026-03-13 00:47:28.404095 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-03-13 00:47:28.404101 | orchestrator | ok: [testbed-manager]
2026-03-13 00:47:28.404108 | orchestrator | 
2026-03-13 00:47:28.404114 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-03-13 00:47:28.404120 | orchestrator | Friday 13 March 2026 00:47:22 +0000 (0:00:53.595) 0:00:57.790 **********
2026-03-13 00:47:28.404127 | orchestrator | changed: [testbed-manager]
2026-03-13 00:47:28.404133 | orchestrator | 
2026-03-13 00:47:28.404139 | orchestrator | PLAY RECAP *********************************************************************
2026-03-13 00:47:28.404145 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 00:47:28.404152 | orchestrator | 
2026-03-13 00:47:28.404158 | orchestrator | 
2026-03-13 00:47:28.404164 | orchestrator | TASKS RECAP ********************************************************************
2026-03-13 00:47:28.404178 | orchestrator | Friday 13 March 2026 00:47:25 +0000 (0:00:03.558) 0:01:01.349 **********
2026-03-13 00:47:28.404184 | orchestrator | ===============================================================================
2026-03-13 00:47:28.404190 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 53.60s
2026-03-13 00:47:28.404197 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.56s
2026-03-13 00:47:28.404208 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.64s
2026-03-13 00:47:28.404214 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.50s
2026-03-13 00:47:28.404220 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.80s
2026-03-13 00:47:28.404227 | orchestrator | 2026-03-13 00:47:28 | INFO  | Task b42c32fc-91a3-4dbd-b76a-1be951fea01c is in state STARTED
2026-03-13 00:47:28.404234 | orchestrator | 2026-03-13 00:47:28 | INFO  | Task ac683604-503a-485f-b693-a2d1141b1536 is in state SUCCESS
2026-03-13 00:47:28.404282 | orchestrator | 2026-03-13 00:47:28 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED
2026-03-13 00:47:28.404295 | orchestrator | 2026-03-13 00:47:28 | INFO  | Task 22cdd9aa-30ec-436c-b7e4-e90ea5cb4abf is in state STARTED
2026-03-13 00:47:28.404302 | orchestrator | 2026-03-13 00:47:28 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED
2026-03-13 00:47:28.404309 | orchestrator | 2026-03-13 00:47:28 | INFO  | Wait 1 second(s) until the next check
2026-03-13 00:47:31.448538 | orchestrator | 2026-03-13 00:47:31 | INFO  | Task b42c32fc-91a3-4dbd-b76a-1be951fea01c is in state STARTED
2026-03-13 00:47:31.448606 | orchestrator | 2026-03-13 00:47:31 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED
2026-03-13 00:47:31.448612 | orchestrator | 2026-03-13 00:47:31 | INFO  | Task 22cdd9aa-30ec-436c-b7e4-e90ea5cb4abf is in state STARTED
2026-03-13 00:47:31.448616 | orchestrator | 2026-03-13 00:47:31 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED
2026-03-13 00:47:31.448621 | orchestrator | 2026-03-13 00:47:31 | INFO  | Wait 1 second(s) until the next check
2026-03-13 00:47:34.479336 | orchestrator | 2026-03-13 00:47:34 | INFO  | Task b42c32fc-91a3-4dbd-b76a-1be951fea01c is in state STARTED
2026-03-13 00:47:34.479825 | orchestrator | 2026-03-13 00:47:34 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED
2026-03-13 00:47:34.483193 | orchestrator | 2026-03-13 00:47:34 | INFO  | Task 22cdd9aa-30ec-436c-b7e4-e90ea5cb4abf is in state STARTED
2026-03-13 00:47:34.485412 | orchestrator | 2026-03-13 00:47:34 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED
2026-03-13 00:47:34.485492 | orchestrator | 2026-03-13 00:47:34 | INFO  | Wait 1 second(s) until the next check
2026-03-13 00:47:37.523071 | orchestrator | 2026-03-13 00:47:37 | INFO  | Task b42c32fc-91a3-4dbd-b76a-1be951fea01c is in state STARTED
2026-03-13 00:47:37.523600 | orchestrator | 2026-03-13 00:47:37 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED
2026-03-13 00:47:37.525953 | orchestrator | 2026-03-13 00:47:37 | INFO  | Task 22cdd9aa-30ec-436c-b7e4-e90ea5cb4abf is in state STARTED
2026-03-13 00:47:37.526679 | orchestrator | 2026-03-13 00:47:37 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED
2026-03-13 00:47:37.526702 | orchestrator | 2026-03-13 00:47:37 | INFO  | Wait 1 second(s) until the next check
2026-03-13 00:47:40.580695 | orchestrator | 2026-03-13 00:47:40 | INFO  | Task b42c32fc-91a3-4dbd-b76a-1be951fea01c is in state STARTED
2026-03-13 00:47:40.583390 | orchestrator | 2026-03-13 00:47:40 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED
2026-03-13 00:47:40.584121 | orchestrator | 2026-03-13 00:47:40 | INFO  | Task 22cdd9aa-30ec-436c-b7e4-e90ea5cb4abf is in state SUCCESS
2026-03-13 00:47:40.584986 | orchestrator | 
2026-03-13 00:47:40.585005 | orchestrator | 
2026-03-13 00:47:40.585011 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-13 00:47:40.585017 | orchestrator | 
2026-03-13 00:47:40.585023 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-13 00:47:40.585044 | orchestrator | Friday 13 March 2026 00:46:09 +0000 (0:00:00.433) 0:00:00.433 **********
2026-03-13 00:47:40.585057 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-03-13 00:47:40.585062 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-03-13 00:47:40.585067 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-03-13 00:47:40.585073 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-03-13 00:47:40.585078 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-03-13 00:47:40.585083 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-03-13 00:47:40.585089 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-03-13 00:47:40.585094 | orchestrator | 
2026-03-13 00:47:40.585099 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-03-13 00:47:40.585104 | orchestrator | 
2026-03-13 00:47:40.585109 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-03-13 00:47:40.585114 | orchestrator | Friday 13 March 2026 00:46:10 +0000 (0:00:01.211) 0:00:01.644 **********
2026-03-13 00:47:40.585128 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-13 00:47:40.585137 | orchestrator | 
2026-03-13 00:47:40.585149 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-03-13 00:47:40.585155 | orchestrator | Friday 13 March 2026 00:46:11 +0000 (0:00:01.213) 0:00:02.858 **********
2026-03-13 00:47:40.585159 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:47:40.585165 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:47:40.585170 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:47:40.585175 | orchestrator | ok: [testbed-manager]
2026-03-13 00:47:40.585180 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:47:40.585185 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:47:40.585191 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:47:40.585196 | orchestrator | 
2026-03-13 00:47:40.585200 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-03-13 00:47:40.585217 | orchestrator | Friday 13 March 2026 00:46:14 +0000 (0:00:02.538) 0:00:05.396 **********
2026-03-13 00:47:40.585222 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:47:40.585227 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:47:40.585232 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:47:40.585238 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:47:40.585243 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:47:40.585263 | orchestrator | ok: [testbed-manager]
2026-03-13 00:47:40.585268 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:47:40.585274 | orchestrator | 
2026-03-13 00:47:40.585279 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-03-13 00:47:40.585284 | orchestrator | Friday 13 March 2026 00:46:17 +0000 (0:00:03.552) 0:00:08.949 **********
2026-03-13 00:47:40.585314 | orchestrator | changed: [testbed-manager]
2026-03-13 00:47:40.585321 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:47:40.585326 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:47:40.585331 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:47:40.585343 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:47:40.585348 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:47:40.585353 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:47:40.585358 | orchestrator | 
2026-03-13 00:47:40.585369 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-03-13 00:47:40.585375 | orchestrator | Friday 13 March 2026 00:46:20 +0000 (0:00:02.509) 0:00:11.458 **********
2026-03-13 00:47:40.585380 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:47:40.585385 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:47:40.585390 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:47:40.585395 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:47:40.585400 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:47:40.585410 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:47:40.585415 | orchestrator | changed: [testbed-manager]
2026-03-13 00:47:40.585421 | orchestrator | 
2026-03-13 00:47:40.585425 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-03-13 00:47:40.585429 | orchestrator | Friday 13 March 2026 00:46:30 +0000 (0:00:10.213) 0:00:21.672 **********
2026-03-13 00:47:40.585431 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:47:40.585435 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:47:40.585438 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:47:40.585446 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:47:40.585449 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:47:40.585452 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:47:40.585455 | orchestrator | changed: [testbed-manager]
2026-03-13 00:47:40.585462 | orchestrator | 
2026-03-13 00:47:40.585465 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-03-13 00:47:40.585468 | orchestrator | Friday 13 March 2026 00:47:10 +0000 (0:00:40.241) 0:01:01.914 **********
2026-03-13 00:47:40.585472 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-13 00:47:40.585475 | orchestrator | 
2026-03-13 00:47:40.585478 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-03-13 00:47:40.585481 | orchestrator | Friday 13 March 2026 00:47:11 +0000 (0:00:01.324) 0:01:03.238 **********
2026-03-13 00:47:40.585484 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-03-13 00:47:40.585488 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-03-13 00:47:40.585495 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-03-13 00:47:40.585498 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-03-13 00:47:40.585508 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-03-13 00:47:40.585512 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-03-13 00:47:40.585515 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-03-13 00:47:40.585518 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-03-13 00:47:40.585521 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-03-13 00:47:40.585524 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-03-13 00:47:40.585527 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-03-13 00:47:40.585530 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-03-13 00:47:40.585533 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-03-13 00:47:40.585536 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-03-13 00:47:40.585539 | orchestrator | 
2026-03-13 00:47:40.585542 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-03-13 00:47:40.585546 | orchestrator | Friday 13 March 2026 00:47:16 +0000 (0:00:04.909) 0:01:08.147 **********
2026-03-13 00:47:40.585549 | orchestrator | ok: [testbed-manager]
2026-03-13 00:47:40.585552 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:47:40.585555 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:47:40.585558 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:47:40.585561 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:47:40.585564 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:47:40.585567 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:47:40.585570 | orchestrator | 
2026-03-13 00:47:40.585573 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-03-13 00:47:40.585576 | orchestrator | Friday 13 March 2026 00:47:18 +0000 (0:00:01.301) 0:01:09.449 **********
2026-03-13 00:47:40.585579 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:47:40.585582 | orchestrator | changed: [testbed-manager]
2026-03-13 00:47:40.585585 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:47:40.585588 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:47:40.585594 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:47:40.585597 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:47:40.585600 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:47:40.585603 | orchestrator | 
2026-03-13 00:47:40.585607 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-03-13 00:47:40.585610 | orchestrator | Friday 13 March 2026 00:47:19 +0000 (0:00:01.551) 0:01:11.000 **********
2026-03-13 00:47:40.585613 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:47:40.585616 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:47:40.585619 | orchestrator | ok: [testbed-manager]
2026-03-13 00:47:40.585622 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:47:40.585626 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:47:40.585632 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:47:40.585636 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:47:40.585639 | orchestrator | 
2026-03-13 00:47:40.585643 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-03-13 00:47:40.585646 | orchestrator | Friday 13 March 2026 00:47:21 +0000 (0:00:02.154) 0:01:12.600 **********
2026-03-13 00:47:40.585650 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:47:40.585654 | orchestrator | ok: [testbed-manager]
2026-03-13 00:47:40.585657 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:47:40.585661 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:47:40.585664 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:47:40.585667 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:47:40.585671 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:47:40.585675 | orchestrator | 
2026-03-13 00:47:40.585678 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-03-13 00:47:40.585682 | orchestrator | Friday 13 March 2026 00:47:23 +0000 (0:00:02.154) 0:01:14.755 **********
2026-03-13 00:47:40.585685 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-03-13 00:47:40.585690 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-13 00:47:40.585694 | orchestrator | 
2026-03-13 00:47:40.585697 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-03-13 00:47:40.585701 | orchestrator | Friday 13 March 2026 00:47:24 +0000 (0:00:01.268) 0:01:16.024 **********
2026-03-13 00:47:40.585704 | orchestrator | changed: [testbed-manager]
2026-03-13 00:47:40.585708 | orchestrator | 
2026-03-13 00:47:40.585711 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-03-13 00:47:40.585715 | orchestrator | Friday 13 March 2026 00:47:26 +0000 (0:00:01.963) 0:01:17.988 **********
2026-03-13 00:47:40.585718 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:47:40.585722 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:47:40.585725 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:47:40.585729 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:47:40.585732 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:47:40.585736 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:47:40.585739 | orchestrator | changed: [testbed-manager]
2026-03-13 00:47:40.585743 | orchestrator | 
2026-03-13 00:47:40.585746 | orchestrator | PLAY RECAP *********************************************************************
2026-03-13 00:47:40.585750 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 00:47:40.585754 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 00:47:40.585757 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 00:47:40.585761 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 00:47:40.585769 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 00:47:40.585773 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 00:47:40.585776 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 00:47:40.585780 | orchestrator | 
2026-03-13 00:47:40.585783 | orchestrator | 
2026-03-13 00:47:40.585787 | orchestrator | TASKS RECAP ********************************************************************
2026-03-13 00:47:40.585790 | orchestrator | Friday 13 March 2026 00:47:37 +0000 (0:00:11.054) 0:01:29.042 **********
2026-03-13 00:47:40.585794 | orchestrator | ===============================================================================
2026-03-13 00:47:40.585797 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 40.24s
2026-03-13 00:47:40.585801 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.05s
2026-03-13 00:47:40.585805 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.21s
2026-03-13 00:47:40.585808 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.91s
2026-03-13 00:47:40.585811 | orchestrator | 
osism.services.netdata : Install apt-transport-https package ------------ 3.55s 2026-03-13 00:47:40.585815 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.54s 2026-03-13 00:47:40.585818 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.51s 2026-03-13 00:47:40.585822 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.15s 2026-03-13 00:47:40.585826 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.96s 2026-03-13 00:47:40.585829 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.60s 2026-03-13 00:47:40.585832 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.55s 2026-03-13 00:47:40.585836 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.32s 2026-03-13 00:47:40.585839 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.30s 2026-03-13 00:47:40.585844 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.27s 2026-03-13 00:47:40.585848 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.21s 2026-03-13 00:47:40.585852 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.21s 2026-03-13 00:47:40.585855 | orchestrator | 2026-03-13 00:47:40 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED 2026-03-13 00:47:40.585860 | orchestrator | 2026-03-13 00:47:40 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:47:43.623474 | orchestrator | 2026-03-13 00:47:43 | INFO  | Task b42c32fc-91a3-4dbd-b76a-1be951fea01c is in state STARTED 2026-03-13 00:47:43.624856 | orchestrator | 2026-03-13 00:47:43 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 
00:47:43.625459 | orchestrator | 2026-03-13 00:47:43 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED 2026-03-13 00:47:43.625481 | orchestrator | 2026-03-13 00:47:43 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:47:46.673973 | orchestrator | 2026-03-13 00:47:46 | INFO  | Task b42c32fc-91a3-4dbd-b76a-1be951fea01c is in state STARTED 2026-03-13 00:47:46.675524 | orchestrator | 2026-03-13 00:47:46 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:47:46.677144 | orchestrator | 2026-03-13 00:47:46 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED 2026-03-13 00:47:46.677186 | orchestrator | 2026-03-13 00:47:46 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:47:49.725808 | orchestrator | 2026-03-13 00:47:49 | INFO  | Task b42c32fc-91a3-4dbd-b76a-1be951fea01c is in state STARTED 2026-03-13 00:47:49.727044 | orchestrator | 2026-03-13 00:47:49 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:47:49.727803 | orchestrator | 2026-03-13 00:47:49 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED 2026-03-13 00:47:49.727827 | orchestrator | 2026-03-13 00:47:49 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:47:52.766408 | orchestrator | 2026-03-13 00:47:52 | INFO  | Task b42c32fc-91a3-4dbd-b76a-1be951fea01c is in state STARTED 2026-03-13 00:47:52.766913 | orchestrator | 2026-03-13 00:47:52 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:47:52.768803 | orchestrator | 2026-03-13 00:47:52 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED 2026-03-13 00:47:52.768860 | orchestrator | 2026-03-13 00:47:52 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:47:55.805599 | orchestrator | 2026-03-13 00:47:55 | INFO  | Task b42c32fc-91a3-4dbd-b76a-1be951fea01c is in state STARTED 2026-03-13 00:47:55.808444 | orchestrator | 2026-03-13 00:47:55 | 
INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:47:55.809080 | orchestrator | 2026-03-13 00:47:55 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED 2026-03-13 00:47:55.809122 | orchestrator | 2026-03-13 00:47:55 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:47:58.849442 | orchestrator | 2026-03-13 00:47:58 | INFO  | Task b42c32fc-91a3-4dbd-b76a-1be951fea01c is in state STARTED 2026-03-13 00:47:58.849543 | orchestrator | 2026-03-13 00:47:58 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:47:58.850396 | orchestrator | 2026-03-13 00:47:58 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED 2026-03-13 00:47:58.850451 | orchestrator | 2026-03-13 00:47:58 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:48:01.886620 | orchestrator | 2026-03-13 00:48:01 | INFO  | Task b42c32fc-91a3-4dbd-b76a-1be951fea01c is in state STARTED 2026-03-13 00:48:01.888960 | orchestrator | 2026-03-13 00:48:01 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:48:01.890592 | orchestrator | 2026-03-13 00:48:01 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED 2026-03-13 00:48:01.890730 | orchestrator | 2026-03-13 00:48:01 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:48:04.916366 | orchestrator | 2026-03-13 00:48:04 | INFO  | Task b42c32fc-91a3-4dbd-b76a-1be951fea01c is in state STARTED 2026-03-13 00:48:04.916442 | orchestrator | 2026-03-13 00:48:04 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:48:04.916978 | orchestrator | 2026-03-13 00:48:04 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED 2026-03-13 00:48:04.917127 | orchestrator | 2026-03-13 00:48:04 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:48:07.965385 | orchestrator | 2026-03-13 00:48:07 | INFO  | Task b42c32fc-91a3-4dbd-b76a-1be951fea01c is in 
state SUCCESS
2026-03-13 00:48:07.966435 | orchestrator |
2026-03-13 00:48:07.966474 | orchestrator |
2026-03-13 00:48:07.966485 | orchestrator | PLAY [Apply role common] *******************************************************
2026-03-13 00:48:07.966537 | orchestrator |
2026-03-13 00:48:07.966544 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-13 00:48:07.966549 | orchestrator | Friday 13 March 2026 00:46:01 +0000 (0:00:00.256) 0:00:00.256 **********
2026-03-13 00:48:07.966554 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-13 00:48:07.966572 | orchestrator |
2026-03-13 00:48:07.966582 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-03-13 00:48:07.966589 | orchestrator | Friday 13 March 2026 00:46:02 +0000 (0:00:01.201) 0:00:01.458 **********
2026-03-13 00:48:07.966595 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-13 00:48:07.966602 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-13 00:48:07.966609 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-13 00:48:07.966616 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-13 00:48:07.966622 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-13 00:48:07.966626 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-13 00:48:07.966629 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-13 00:48:07.966633 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-13 00:48:07.966637 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-13 00:48:07.966641 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-13 00:48:07.966645 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-13 00:48:07.966649 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-13 00:48:07.966652 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-13 00:48:07.966656 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-13 00:48:07.966660 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-13 00:48:07.966666 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-13 00:48:07.966671 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-13 00:48:07.966675 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-13 00:48:07.966981 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-13 00:48:07.966987 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-13 00:48:07.966990 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-13 00:48:07.966994 | orchestrator |
2026-03-13 00:48:07.966998 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-13 00:48:07.967002 | orchestrator | Friday 13 March 2026 00:46:06 +0000 (0:00:03.892) 0:00:05.351 **********
2026-03-13 00:48:07.967006 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-13 00:48:07.967011 | orchestrator |
2026-03-13 00:48:07.967015 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-03-13 00:48:07.967019 | orchestrator | Friday 13 March 2026 00:46:07 +0000 (0:00:01.306) 0:00:06.657 **********
2026-03-13 00:48:07.967025 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-13 00:48:07.967036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-13 00:48:07.967050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-13 00:48:07.967055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-13 00:48:07.967059 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.967063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.967067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.967071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.967080 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-13 00:48:07.967091 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-13 00:48:07.967098 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-13 00:48:07.967105 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.967118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.967123 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.967127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.967131 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.967138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.967158 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.967166 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.967176 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.967184 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.967190 | orchestrator |
2026-03-13 00:48:07.967197 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-03-13 00:48:07.967203 | orchestrator | Friday 13 March 2026 00:46:14 +0000 (0:00:06.355) 0:00:13.013 **********
2026-03-13 00:48:07.967210 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-13 00:48:07.967217 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.967228 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.967235 | orchestrator | skipping: [testbed-manager]
2026-03-13 00:48:07.967240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-13 00:48:07.967275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.967280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.967284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-13 00:48:07.967288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.967292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.967296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-13 00:48:07.967365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.967371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.967375 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:48:07.967379 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:48:07.967382 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:48:07.967391 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-13 00:48:07.967396 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.967400 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.967404 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:48:07.967407 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-13 00:48:07.967411 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.967720 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.967728 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:48:07.967732 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-13 00:48:07.967743 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.967748 | orchestrator |
skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:48:07.967752 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:48:07.967756 | orchestrator | 2026-03-13 00:48:07.967760 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-13 00:48:07.967764 | orchestrator | Friday 13 March 2026 00:46:15 +0000 (0:00:01.399) 0:00:14.412 ********** 2026-03-13 00:48:07.967767 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-13 00:48:07.967771 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:48:07.967778 | 
orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:48:07.967782 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:48:07.967786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-13 00:48:07.967790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:48:07.967794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:48:07.967797 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:48:07.967809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-13 00:48:07.967819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:48:07.967823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:48:07.967827 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:48:07.967831 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-13 00:48:07.967838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:48:07.967842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:48:07.967846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-13 00:48:07.967854 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:48:07.967858 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:48:07.967862 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:48:07.967866 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:48:07.967870 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  
2026-03-13 00:48:07.967874 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:48:07.967880 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:48:07.967883 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:48:07.967887 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-13 00:48:07.967891 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:48:07.967896 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:48:07.967899 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:48:07.967903 | orchestrator | 2026-03-13 00:48:07.967907 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-13 00:48:07.967911 | orchestrator | Friday 13 March 2026 00:46:18 +0000 (0:00:03.393) 0:00:17.806 ********** 2026-03-13 00:48:07.967915 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:48:07.967919 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:48:07.967922 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:48:07.967926 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:48:07.967930 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:48:07.967936 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:48:07.967940 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:48:07.967943 | orchestrator | 2026-03-13 00:48:07.967971 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-13 00:48:07.967991 | orchestrator | Friday 13 March 2026 00:46:20 +0000 (0:00:01.841) 0:00:19.647 ********** 2026-03-13 
00:48:07.967996 | orchestrator | skipping: [testbed-manager] 2026-03-13 00:48:07.968000 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:48:07.968003 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:48:07.968007 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:48:07.968011 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:48:07.968014 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:48:07.968018 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:48:07.968024 | orchestrator | 2026-03-13 00:48:07.968028 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-13 00:48:07.968032 | orchestrator | Friday 13 March 2026 00:46:23 +0000 (0:00:02.606) 0:00:22.254 ********** 2026-03-13 00:48:07.968038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-13 00:48:07.968042 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-13 00:48:07.968046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:48:07.968050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-13 00:48:07.968054 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-13 00:48:07.968058 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-13 00:48:07.968068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-13 00:48:07.968075 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:48:07.968079 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-13 00:48:07.968083 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:48:07.968087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:48:07.968091 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:48:07.968095 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:48:07.968103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:48:07.968111 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:48:07.968116 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:48:07.968122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:48:07.968132 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:48:07.968139 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:48:07.968145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:48:07.968152 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.968159 | orchestrator |
2026-03-13 00:48:07.968165 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-03-13 00:48:07.968171 | orchestrator | Friday 13 March 2026 00:46:29 +0000 (0:00:05.762) 0:00:28.017 **********
2026-03-13 00:48:07.968178 | orchestrator | [WARNING]: Skipped
2026-03-13 00:48:07.968183 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-03-13 00:48:07.968187 | orchestrator | to this access issue:
2026-03-13 00:48:07.968191 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-03-13 00:48:07.968198 | orchestrator | directory
2026-03-13 00:48:07.968202 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-13 00:48:07.968205 | orchestrator |
2026-03-13 00:48:07.968209 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-03-13 00:48:07.968213 | orchestrator | Friday 13 March 2026 00:46:30 +0000 (0:00:01.790) 0:00:29.808 **********
2026-03-13 00:48:07.968216 | orchestrator | [WARNING]: Skipped
2026-03-13 00:48:07.968224 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-03-13 00:48:07.968231 | orchestrator | to this access issue:
2026-03-13 00:48:07.968235 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-03-13 00:48:07.968238 | orchestrator | directory
2026-03-13 00:48:07.968242 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-13 00:48:07.968246 | orchestrator |
2026-03-13 00:48:07.968260 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-03-13 00:48:07.968267 | orchestrator | Friday 13 March 2026 00:46:31 +0000 (0:00:00.776) 0:00:30.585 **********
2026-03-13 00:48:07.968274 | orchestrator | [WARNING]: Skipped
2026-03-13 00:48:07.968278 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-03-13 00:48:07.968282 | orchestrator | to this access issue:
2026-03-13 00:48:07.968286 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-03-13 00:48:07.968289 | orchestrator | directory
2026-03-13 00:48:07.968293 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-13 00:48:07.968297 | orchestrator |
2026-03-13 00:48:07.968300 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-03-13 00:48:07.968304 | orchestrator | Friday 13 March 2026 00:46:32 +0000 (0:00:00.734) 0:00:31.319 **********
2026-03-13 00:48:07.968308 | orchestrator | [WARNING]: Skipped
2026-03-13 00:48:07.968312 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-03-13 00:48:07.968315 | orchestrator | to this access issue:
2026-03-13 00:48:07.968319 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-03-13 00:48:07.968323 | orchestrator | directory
2026-03-13 00:48:07.968327 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-13 00:48:07.968332 | orchestrator |
2026-03-13 00:48:07.968336 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-03-13 00:48:07.968341 | orchestrator | Friday 13 March 2026 00:46:33 +0000 (0:00:04.209) 0:00:32.070 **********
2026-03-13 00:48:07.968345 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:48:07.968349 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:48:07.968354 | orchestrator | changed: [testbed-manager]
2026-03-13 00:48:07.968358 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:48:07.968363 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:48:07.968367 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:48:07.968371 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:48:07.968376 | orchestrator |
2026-03-13 00:48:07.968380 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-03-13 00:48:07.968385 | orchestrator | Friday 13 March 2026 00:46:37 +0000 (0:00:04.209) 0:00:36.279 **********
2026-03-13 00:48:07.968389 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-13 00:48:07.968394 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-13 00:48:07.968399 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-13 00:48:07.968403 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-13 00:48:07.968407 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-13 00:48:07.968410 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-13 00:48:07.968417 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-13 00:48:07.968421 | orchestrator |
2026-03-13 00:48:07.968424 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-03-13 00:48:07.968428 | orchestrator | Friday 13 March 2026 00:46:40 +0000 (0:00:03.415) 0:00:39.695 **********
2026-03-13 00:48:07.968432 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:48:07.968436 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:48:07.968439 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:48:07.968443 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:48:07.968447 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:48:07.968450 | orchestrator | changed: [testbed-manager]
2026-03-13 00:48:07.968454 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:48:07.968458 | orchestrator |
2026-03-13 00:48:07.968461 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2026-03-13 00:48:07.968465 | orchestrator | Friday 13 March 2026 00:46:43 +0000 (0:00:03.134) 0:00:42.829 **********
2026-03-13 00:48:07.968469 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-13 00:48:07.968490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.968499 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name':
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-13 00:48:07.968506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.968511 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-13 00:48:07.968515 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.968522 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.968526 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-13 00:48:07.968530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.968538 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.968542 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-13 00:48:07.968546 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.968550 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.968556 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-13 00:48:07.968560 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.968564 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-13 00:48:07.968568 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.968576 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.968580 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.968584 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.968588 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image':
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.968594 | orchestrator |
2026-03-13 00:48:07.968598 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-03-13 00:48:07.968602 | orchestrator | Friday 13 March 2026 00:46:46 +0000 (0:00:02.914) 0:00:45.743 **********
2026-03-13 00:48:07.968606 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-13 00:48:07.968610 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-13 00:48:07.968613 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-13 00:48:07.968617 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-13 00:48:07.968621 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-13 00:48:07.968625 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-13 00:48:07.968628 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-13 00:48:07.968632 | orchestrator |
2026-03-13 00:48:07.968636 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-03-13 00:48:07.968639 | orchestrator | Friday 13 March 2026 00:46:50 +0000 (0:00:03.457) 0:00:49.200 **********
2026-03-13 00:48:07.968643 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-13 00:48:07.968647 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-13 00:48:07.968651 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-13 00:48:07.968654 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-13 00:48:07.968658 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-13 00:48:07.968662 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-13 00:48:07.968669 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-13 00:48:07.968675 | orchestrator |
2026-03-13 00:48:07.968681 | orchestrator | TASK [common : Check common containers] ****************************************
2026-03-13 00:48:07.968687 | orchestrator | Friday 13 March 2026 00:46:52 +0000 (0:00:02.474) 0:00:51.675 **********
2026-03-13 00:48:07.968694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-13 00:48:07.968706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-13 00:48:07.968713 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-13 00:48:07.968725 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-13 00:48:07.968729 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-13 00:48:07.968733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-13 00:48:07.968737 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.968741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.968751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.968755 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.968762 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-13 00:48:07.968768 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.968772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.968776 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.968780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.968784 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.968792 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.968799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.968803 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.968806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.968810 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:48:07.968814 | orchestrator |
2026-03-13 00:48:07.968818 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-03-13 00:48:07.968822 | orchestrator | Friday 13 March 2026 00:46:56 +0000 (0:00:04.182) 0:00:55.858 **********
2026-03-13 00:48:07.968826 | orchestrator | changed: [testbed-manager]
2026-03-13 00:48:07.968829 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:48:07.968833 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:48:07.968837 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:48:07.968841 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:48:07.968844 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:48:07.968848 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:48:07.968852 | orchestrator |
2026-03-13 00:48:07.968855 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-03-13 00:48:07.968859 | orchestrator | Friday 13 March 2026 00:46:58 +0000 (0:00:01.582) 0:00:57.440 **********
2026-03-13 00:48:07.968863 | orchestrator | changed: [testbed-manager]
2026-03-13 00:48:07.968867 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:48:07.968870 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:48:07.968874 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:48:07.968878 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:48:07.968882 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:48:07.968885 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:48:07.968889 | orchestrator |
2026-03-13 00:48:07.968893 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-13 00:48:07.968897 | orchestrator | Friday 13 March 2026 00:46:59 +0000 (0:00:01.206) 0:00:58.646 **********
2026-03-13 00:48:07.968900 | orchestrator |
2026-03-13 00:48:07.968904 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-13 00:48:07.968908 | orchestrator | Friday 13 March 2026 00:46:59 +0000 (0:00:00.061) 0:00:58.708 **********
2026-03-13 00:48:07.968914 | orchestrator |
2026-03-13 00:48:07.968921 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-13 00:48:07.968927 | orchestrator | Friday 13 March 2026 00:46:59 +0000 (0:00:00.058) 0:00:58.767 **********
2026-03-13 00:48:07.968936 | orchestrator |
2026-03-13 00:48:07.968940 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-13 00:48:07.968944 | orchestrator | Friday 13 March 2026 00:46:59 +0000 (0:00:00.166) 0:00:58.933 **********
2026-03-13 00:48:07.968950 | orchestrator |
2026-03-13 00:48:07.968959 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-13 00:48:07.968967 | orchestrator | Friday 13 March 2026 00:46:59 +0000 (0:00:00.058) 0:00:58.991 **********
2026-03-13 00:48:07.968973 | orchestrator |
2026-03-13 00:48:07.968979 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-13 00:48:07.968985 | orchestrator | Friday 13 March 2026 00:47:00 +0000 (0:00:00.058) 0:00:59.049 **********
2026-03-13 00:48:07.968991 | orchestrator |
2026-03-13 00:48:07.968997 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-13 00:48:07.969003 | orchestrator | Friday 13 March 2026 00:47:00 +0000 (0:00:00.062) 0:00:59.111 **********
2026-03-13 00:48:07.969009 | orchestrator |
2026-03-13 00:48:07.969016 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-03-13 00:48:07.969030 | orchestrator | Friday 13 March 2026 00:47:00 +0000 (0:00:00.079) 0:00:59.191 **********
2026-03-13 00:48:07.969037 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:48:07.969044 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:48:07.969048 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:48:07.969052 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:48:07.969055 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:48:07.969059 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:48:07.969063 | orchestrator | changed: [testbed-manager]
2026-03-13 00:48:07.969066 | orchestrator |
2026-03-13 00:48:07.969070 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-03-13 00:48:07.969074 | orchestrator | Friday 13 March 2026 00:47:30 +0000 (0:00:29.859) 0:01:29.050 **********
2026-03-13 00:48:07.969078 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:48:07.969081 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:48:07.969085 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:48:07.969089 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:48:07.969092 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:48:07.969096 | orchestrator | changed: [testbed-manager]
2026-03-13 00:48:07.969100 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:48:07.969103 | orchestrator |
2026-03-13 00:48:07.969107 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-03-13 00:48:07.969111 | orchestrator | Friday 13 March 2026 00:48:00 +0000 (0:00:30.363) 0:01:59.414 **********
2026-03-13 00:48:07.969115 | orchestrator | ok: [testbed-manager]
2026-03-13 00:48:07.969119 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:48:07.969122 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:48:07.969126 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:48:07.969130 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:48:07.969133 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:48:07.969137 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:48:07.969141 | orchestrator |
2026-03-13 00:48:07.969145 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-03-13 00:48:07.969148 | orchestrator | Friday 13 March 2026 00:48:02 +0000 (0:00:01.905) 0:02:01.320 **********
2026-03-13 00:48:07.969152 | orchestrator | changed: [testbed-manager]
2026-03-13 00:48:07.969156 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:48:07.969160 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:48:07.969163 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:48:07.969167 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:48:07.969171 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:48:07.969174 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:48:07.969178 | orchestrator |
2026-03-13 00:48:07.969182 | orchestrator | PLAY RECAP *********************************************************************
2026-03-13 00:48:07.969186 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-13 00:48:07.969193 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-13 00:48:07.969197 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-13 00:48:07.969201 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-13 00:48:07.969205 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-13 00:48:07.969209 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-13 00:48:07.969212 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-13 00:48:07.969216 | orchestrator |
2026-03-13 00:48:07.969220 | orchestrator |
2026-03-13 00:48:07.969224 | orchestrator | TASKS RECAP ********************************************************************
2026-03-13 00:48:07.969227 | orchestrator | Friday 13 March 2026 00:48:06 +0000 (0:00:04.446) 0:02:05.766 **********
2026-03-13 00:48:07.969231 | orchestrator | ===============================================================================
2026-03-13 00:48:07.969235 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 30.36s
2026-03-13 00:48:07.969283 | orchestrator | common : Restart fluentd container ------------------------------------- 29.86s
2026-03-13 00:48:07.969293 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 6.36s
2026-03-13 00:48:07.969299 | orchestrator | common : Copying over config.json files for services -------------------- 5.76s
2026-03-13 00:48:07.969306 | orchestrator | common : Restart cron container ----------------------------------------- 4.45s
2026-03-13 00:48:07.969312 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.21s
2026-03-13 00:48:07.969318 | orchestrator | common : Check common containers ---------------------------------------- 4.18s
2026-03-13 00:48:07.969322 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.89s
2026-03-13 00:48:07.969326 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.46s
2026-03-13 00:48:07.969329
| orchestrator | common : Copying over cron logrotate config file ------------------------ 3.42s 2026-03-13 00:48:07.969333 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.39s 2026-03-13 00:48:07.969337 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.13s 2026-03-13 00:48:07.969340 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.91s 2026-03-13 00:48:07.969347 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 2.61s 2026-03-13 00:48:07.969354 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.47s 2026-03-13 00:48:07.969358 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.91s 2026-03-13 00:48:07.969361 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.84s 2026-03-13 00:48:07.969365 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.79s 2026-03-13 00:48:07.969369 | orchestrator | common : Creating log volume -------------------------------------------- 1.58s 2026-03-13 00:48:07.969372 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.40s 2026-03-13 00:48:07.970832 | orchestrator | 2026-03-13 00:48:07 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:48:07.971391 | orchestrator | 2026-03-13 00:48:07 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED 2026-03-13 00:48:07.971427 | orchestrator | 2026-03-13 00:48:07 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:48:10.993338 | orchestrator | 2026-03-13 00:48:10 | INFO  | Task b6f8ffb3-89f6-4a8d-b241-4041a03f2221 is in state STARTED 2026-03-13 00:48:10.995736 | orchestrator | 2026-03-13 00:48:10 | INFO  | Task 82622195-3605-41c1-b170-8b2daa6eac88 is in state STARTED 
2026-03-13 00:48:10.996059 | orchestrator | 2026-03-13 00:48:10 | INFO  | Task 739113a6-104a-4678-a855-46a9cb89b864 is in state STARTED 2026-03-13 00:48:10.996844 | orchestrator | 2026-03-13 00:48:10 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:48:10.997424 | orchestrator | 2026-03-13 00:48:10 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:48:10.998101 | orchestrator | 2026-03-13 00:48:10 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED 2026-03-13 00:48:10.998846 | orchestrator | 2026-03-13 00:48:10 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:48:14.023017 | orchestrator | 2026-03-13 00:48:14 | INFO  | Task b6f8ffb3-89f6-4a8d-b241-4041a03f2221 is in state STARTED 2026-03-13 00:48:14.023228 | orchestrator | 2026-03-13 00:48:14 | INFO  | Task 82622195-3605-41c1-b170-8b2daa6eac88 is in state STARTED 2026-03-13 00:48:14.025757 | orchestrator | 2026-03-13 00:48:14 | INFO  | Task 739113a6-104a-4678-a855-46a9cb89b864 is in state STARTED 2026-03-13 00:48:14.026518 | orchestrator | 2026-03-13 00:48:14 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:48:14.027000 | orchestrator | 2026-03-13 00:48:14 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:48:14.027980 | orchestrator | 2026-03-13 00:48:14 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED 2026-03-13 00:48:14.028016 | orchestrator | 2026-03-13 00:48:14 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:48:17.052031 | orchestrator | 2026-03-13 00:48:17 | INFO  | Task b6f8ffb3-89f6-4a8d-b241-4041a03f2221 is in state STARTED 2026-03-13 00:48:17.053626 | orchestrator | 2026-03-13 00:48:17 | INFO  | Task 82622195-3605-41c1-b170-8b2daa6eac88 is in state STARTED 2026-03-13 00:48:17.055029 | orchestrator | 2026-03-13 00:48:17 | INFO  | Task 739113a6-104a-4678-a855-46a9cb89b864 is in state STARTED 
2026-03-13 00:48:17.056806 | orchestrator | 2026-03-13 00:48:17 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:48:17.058442 | orchestrator | 2026-03-13 00:48:17 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:48:17.058970 | orchestrator | 2026-03-13 00:48:17 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED 2026-03-13 00:48:17.059144 | orchestrator | 2026-03-13 00:48:17 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:48:20.093939 | orchestrator | 2026-03-13 00:48:20 | INFO  | Task b6f8ffb3-89f6-4a8d-b241-4041a03f2221 is in state STARTED 2026-03-13 00:48:20.095555 | orchestrator | 2026-03-13 00:48:20 | INFO  | Task 82622195-3605-41c1-b170-8b2daa6eac88 is in state STARTED 2026-03-13 00:48:20.096793 | orchestrator | 2026-03-13 00:48:20 | INFO  | Task 739113a6-104a-4678-a855-46a9cb89b864 is in state STARTED 2026-03-13 00:48:20.098231 | orchestrator | 2026-03-13 00:48:20 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:48:20.099314 | orchestrator | 2026-03-13 00:48:20 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:48:20.100845 | orchestrator | 2026-03-13 00:48:20 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED 2026-03-13 00:48:20.100918 | orchestrator | 2026-03-13 00:48:20 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:48:23.134049 | orchestrator | 2026-03-13 00:48:23 | INFO  | Task bf20fdea-193b-4caa-ab99-bd80708e3f64 is in state STARTED 2026-03-13 00:48:23.134130 | orchestrator | 2026-03-13 00:48:23 | INFO  | Task b6f8ffb3-89f6-4a8d-b241-4041a03f2221 is in state SUCCESS 2026-03-13 00:48:23.134760 | orchestrator | 2026-03-13 00:48:23 | INFO  | Task 82622195-3605-41c1-b170-8b2daa6eac88 is in state STARTED 2026-03-13 00:48:23.135600 | orchestrator | 2026-03-13 00:48:23 | INFO  | Task 739113a6-104a-4678-a855-46a9cb89b864 is in state STARTED 
2026-03-13 00:48:23.136012 | orchestrator | 2026-03-13 00:48:23 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:48:23.136947 | orchestrator | 2026-03-13 00:48:23 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:48:23.139674 | orchestrator | 2026-03-13 00:48:23 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED 2026-03-13 00:48:23.139719 | orchestrator | 2026-03-13 00:48:23 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:48:26.165984 | orchestrator | 2026-03-13 00:48:26 | INFO  | Task bf20fdea-193b-4caa-ab99-bd80708e3f64 is in state STARTED 2026-03-13 00:48:26.166999 | orchestrator | 2026-03-13 00:48:26 | INFO  | Task 82622195-3605-41c1-b170-8b2daa6eac88 is in state STARTED 2026-03-13 00:48:26.168137 | orchestrator | 2026-03-13 00:48:26 | INFO  | Task 739113a6-104a-4678-a855-46a9cb89b864 is in state STARTED 2026-03-13 00:48:26.169288 | orchestrator | 2026-03-13 00:48:26 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:48:26.170898 | orchestrator | 2026-03-13 00:48:26 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:48:26.172194 | orchestrator | 2026-03-13 00:48:26 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED 2026-03-13 00:48:26.172392 | orchestrator | 2026-03-13 00:48:26 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:48:29.199013 | orchestrator | 2026-03-13 00:48:29 | INFO  | Task bf20fdea-193b-4caa-ab99-bd80708e3f64 is in state STARTED 2026-03-13 00:48:29.201493 | orchestrator | 2026-03-13 00:48:29 | INFO  | Task 82622195-3605-41c1-b170-8b2daa6eac88 is in state STARTED 2026-03-13 00:48:29.202149 | orchestrator | 2026-03-13 00:48:29 | INFO  | Task 739113a6-104a-4678-a855-46a9cb89b864 is in state STARTED 2026-03-13 00:48:29.203198 | orchestrator | 2026-03-13 00:48:29 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 
2026-03-13 00:48:29.204600 | orchestrator | 2026-03-13 00:48:29 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:48:29.206681 | orchestrator | 2026-03-13 00:48:29 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED 2026-03-13 00:48:29.206728 | orchestrator | 2026-03-13 00:48:29 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:48:32.228889 | orchestrator | 2026-03-13 00:48:32 | INFO  | Task bf20fdea-193b-4caa-ab99-bd80708e3f64 is in state STARTED 2026-03-13 00:48:32.230189 | orchestrator | 2026-03-13 00:48:32 | INFO  | Task 82622195-3605-41c1-b170-8b2daa6eac88 is in state STARTED 2026-03-13 00:48:32.230940 | orchestrator | 2026-03-13 00:48:32 | INFO  | Task 739113a6-104a-4678-a855-46a9cb89b864 is in state STARTED 2026-03-13 00:48:32.231507 | orchestrator | 2026-03-13 00:48:32 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:48:32.232795 | orchestrator | 2026-03-13 00:48:32 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:48:32.233467 | orchestrator | 2026-03-13 00:48:32 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED 2026-03-13 00:48:32.233865 | orchestrator | 2026-03-13 00:48:32 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:48:35.309997 | orchestrator | 2026-03-13 00:48:35 | INFO  | Task bf20fdea-193b-4caa-ab99-bd80708e3f64 is in state STARTED 2026-03-13 00:48:35.313552 | orchestrator | 2026-03-13 00:48:35 | INFO  | Task 82622195-3605-41c1-b170-8b2daa6eac88 is in state STARTED 2026-03-13 00:48:35.313645 | orchestrator | 2026-03-13 00:48:35 | INFO  | Task 739113a6-104a-4678-a855-46a9cb89b864 is in state STARTED 2026-03-13 00:48:35.313651 | orchestrator | 2026-03-13 00:48:35 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:48:35.313656 | orchestrator | 2026-03-13 00:48:35 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 
2026-03-13 00:48:35.313999 | orchestrator | 2026-03-13 00:48:35 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED 2026-03-13 00:48:35.314047 | orchestrator | 2026-03-13 00:48:35 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:48:38.508479 | orchestrator | 2026-03-13 00:48:38 | INFO  | Task bf20fdea-193b-4caa-ab99-bd80708e3f64 is in state STARTED 2026-03-13 00:48:38.510368 | orchestrator | 2026-03-13 00:48:38 | INFO  | Task 82622195-3605-41c1-b170-8b2daa6eac88 is in state STARTED 2026-03-13 00:48:38.511093 | orchestrator | 2026-03-13 00:48:38 | INFO  | Task 739113a6-104a-4678-a855-46a9cb89b864 is in state SUCCESS 2026-03-13 00:48:38.513083 | orchestrator | 2026-03-13 00:48:38.513134 | orchestrator | 2026-03-13 00:48:38.513141 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-13 00:48:38.513146 | orchestrator | 2026-03-13 00:48:38.513150 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-13 00:48:38.513167 | orchestrator | Friday 13 March 2026 00:48:11 +0000 (0:00:00.224) 0:00:00.224 ********** 2026-03-13 00:48:38.513178 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:48:38.513183 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:48:38.513187 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:48:38.513191 | orchestrator | 2026-03-13 00:48:38.513196 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-13 00:48:38.513200 | orchestrator | Friday 13 March 2026 00:48:11 +0000 (0:00:00.257) 0:00:00.482 ********** 2026-03-13 00:48:38.513204 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-03-13 00:48:38.513209 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-03-13 00:48:38.513213 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-03-13 00:48:38.513216 | orchestrator | 2026-03-13 00:48:38.513220 | 
orchestrator | PLAY [Apply role memcached] **************************************************** 2026-03-13 00:48:38.513224 | orchestrator | 2026-03-13 00:48:38.513228 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-03-13 00:48:38.513231 | orchestrator | Friday 13 March 2026 00:48:12 +0000 (0:00:00.408) 0:00:00.891 ********** 2026-03-13 00:48:38.513236 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:48:38.513240 | orchestrator | 2026-03-13 00:48:38.513244 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-03-13 00:48:38.513247 | orchestrator | Friday 13 March 2026 00:48:12 +0000 (0:00:00.404) 0:00:01.295 ********** 2026-03-13 00:48:38.513251 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-13 00:48:38.513293 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-13 00:48:38.513298 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-13 00:48:38.513302 | orchestrator | 2026-03-13 00:48:38.513305 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-03-13 00:48:38.513309 | orchestrator | Friday 13 March 2026 00:48:13 +0000 (0:00:00.710) 0:00:02.006 ********** 2026-03-13 00:48:38.513332 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-13 00:48:38.513338 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-13 00:48:38.513344 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-13 00:48:38.513350 | orchestrator | 2026-03-13 00:48:38.513358 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-03-13 00:48:38.513364 | orchestrator | Friday 13 March 2026 00:48:15 +0000 (0:00:02.193) 0:00:04.200 ********** 2026-03-13 00:48:38.513370 | orchestrator | changed: [testbed-node-0] 
2026-03-13 00:48:38.513376 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:48:38.513382 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:48:38.513389 | orchestrator | 2026-03-13 00:48:38.513395 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-03-13 00:48:38.513401 | orchestrator | Friday 13 March 2026 00:48:17 +0000 (0:00:01.714) 0:00:05.914 ********** 2026-03-13 00:48:38.513407 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:48:38.513413 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:48:38.513419 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:48:38.513426 | orchestrator | 2026-03-13 00:48:38.513431 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 00:48:38.513438 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-13 00:48:38.513446 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-13 00:48:38.513452 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-13 00:48:38.513457 | orchestrator | 2026-03-13 00:48:38.513463 | orchestrator | 2026-03-13 00:48:38.513480 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 00:48:38.513495 | orchestrator | Friday 13 March 2026 00:48:20 +0000 (0:00:03.004) 0:00:08.919 ********** 2026-03-13 00:48:38.513501 | orchestrator | =============================================================================== 2026-03-13 00:48:38.513505 | orchestrator | memcached : Restart memcached container --------------------------------- 3.00s 2026-03-13 00:48:38.513509 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.19s 2026-03-13 00:48:38.513513 | orchestrator | memcached : Check memcached container 
----------------------------------- 1.71s 2026-03-13 00:48:38.513517 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.71s 2026-03-13 00:48:38.513521 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s 2026-03-13 00:48:38.513525 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.41s 2026-03-13 00:48:38.513540 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.26s 2026-03-13 00:48:38.513544 | orchestrator | 2026-03-13 00:48:38.513547 | orchestrator | 2026-03-13 00:48:38.513551 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-13 00:48:38.513555 | orchestrator | 2026-03-13 00:48:38.513558 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-13 00:48:38.513562 | orchestrator | Friday 13 March 2026 00:48:12 +0000 (0:00:00.233) 0:00:00.233 ********** 2026-03-13 00:48:38.513566 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:48:38.513569 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:48:38.513573 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:48:38.513577 | orchestrator | 2026-03-13 00:48:38.513580 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-13 00:48:38.513596 | orchestrator | Friday 13 March 2026 00:48:12 +0000 (0:00:00.265) 0:00:00.498 ********** 2026-03-13 00:48:38.513600 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-03-13 00:48:38.513604 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-03-13 00:48:38.513608 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-03-13 00:48:38.513617 | orchestrator | 2026-03-13 00:48:38.513621 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-03-13 00:48:38.513624 | 
orchestrator | 2026-03-13 00:48:38.513628 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-03-13 00:48:38.513632 | orchestrator | Friday 13 March 2026 00:48:12 +0000 (0:00:00.497) 0:00:00.995 ********** 2026-03-13 00:48:38.513635 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:48:38.513639 | orchestrator | 2026-03-13 00:48:38.513643 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-03-13 00:48:38.513646 | orchestrator | Friday 13 March 2026 00:48:13 +0000 (0:00:00.579) 0:00:01.575 ********** 2026-03-13 00:48:38.513653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-13 00:48:38.513667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-13 00:48:38.513672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 
'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-13 00:48:38.513676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-13 00:48:38.513684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-13 00:48:38.513693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 
'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-13 00:48:38.513701 | orchestrator | 2026-03-13 00:48:38.513706 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-03-13 00:48:38.513710 | orchestrator | Friday 13 March 2026 00:48:14 +0000 (0:00:01.199) 0:00:02.774 ********** 2026-03-13 00:48:38.513715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-13 00:48:38.513719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-13 00:48:38.513724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-13 00:48:38.513729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-13 00:48:38.513733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-13 00:48:38.513743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-13 00:48:38.513751 | orchestrator | 2026-03-13 00:48:38.513755 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-03-13 00:48:38.513760 | orchestrator | Friday 13 March 2026 00:48:17 +0000 (0:00:02.892) 0:00:05.667 ********** 2026-03-13 00:48:38.513843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-13 00:48:38.513848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-13 00:48:38.513852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-13 00:48:38.513856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-13 00:48:38.513860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-13 00:48:38.513868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-13 00:48:38.513872 | orchestrator | 2026-03-13 00:48:38.513913 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-03-13 00:48:38.513918 | orchestrator | Friday 13 March 2026 00:48:20 +0000 (0:00:02.743) 0:00:08.411 ********** 2026-03-13 00:48:38.513922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-13 00:48:38.513927 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-13 00:48:38.513931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-13 00:48:38.513940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-13 00:48:38.513944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 
'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-13 00:48:38.513954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-13 00:48:38.513958 | orchestrator | 2026-03-13 00:48:38.513962 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-13 00:48:38.513966 | orchestrator | Friday 13 March 2026 00:48:22 +0000 (0:00:01.806) 0:00:10.217 ********** 2026-03-13 00:48:38.513970 | orchestrator | 2026-03-13 00:48:38.513974 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-13 00:48:38.513980 | orchestrator | Friday 13 March 2026 00:48:22 +0000 (0:00:00.109) 0:00:10.327 ********** 2026-03-13 00:48:38.513984 | orchestrator | 2026-03-13 00:48:38.513988 | orchestrator | TASK [redis : Flush handlers] 
************************************************** 2026-03-13 00:48:38.513992 | orchestrator | Friday 13 March 2026 00:48:22 +0000 (0:00:00.081) 0:00:10.408 ********** 2026-03-13 00:48:38.513997 | orchestrator | 2026-03-13 00:48:38.514003 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-03-13 00:48:38.514009 | orchestrator | Friday 13 March 2026 00:48:22 +0000 (0:00:00.094) 0:00:10.503 ********** 2026-03-13 00:48:38.514046 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:48:38.514050 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:48:38.514054 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:48:38.514058 | orchestrator | 2026-03-13 00:48:38.514062 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-03-13 00:48:38.514065 | orchestrator | Friday 13 March 2026 00:48:26 +0000 (0:00:04.456) 0:00:14.960 ********** 2026-03-13 00:48:38.514069 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:48:38.514073 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:48:38.514077 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:48:38.514080 | orchestrator | 2026-03-13 00:48:38.514084 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 00:48:38.514088 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-13 00:48:38.514092 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-13 00:48:38.514096 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-13 00:48:38.514100 | orchestrator | 2026-03-13 00:48:38.514103 | orchestrator | 2026-03-13 00:48:38.514107 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 00:48:38.514111 | orchestrator | Friday 13 March 2026 
00:48:35 +0000 (0:00:08.683) 0:00:23.643 ********** 2026-03-13 00:48:38.514114 | orchestrator | =============================================================================== 2026-03-13 00:48:38.514118 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.68s 2026-03-13 00:48:38.514122 | orchestrator | redis : Restart redis container ----------------------------------------- 4.46s 2026-03-13 00:48:38.514126 | orchestrator | redis : Copying over default config.json files -------------------------- 2.89s 2026-03-13 00:48:38.514129 | orchestrator | redis : Copying over redis config files --------------------------------- 2.74s 2026-03-13 00:48:38.514133 | orchestrator | redis : Check redis containers ------------------------------------------ 1.81s 2026-03-13 00:48:38.514137 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.20s 2026-03-13 00:48:38.514146 | orchestrator | redis : include_tasks --------------------------------------------------- 0.58s 2026-03-13 00:48:38.514150 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.50s 2026-03-13 00:48:38.514153 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.29s 2026-03-13 00:48:38.514157 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s 2026-03-13 00:48:38.514161 | orchestrator | 2026-03-13 00:48:38 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:48:38.514165 | orchestrator | 2026-03-13 00:48:38 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:48:38.514169 | orchestrator | 2026-03-13 00:48:38 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED 2026-03-13 00:48:38.514173 | orchestrator | 2026-03-13 00:48:38 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:48:41.541763 | orchestrator | 2026-03-13 00:48:41 
| INFO  | Task bf20fdea-193b-4caa-ab99-bd80708e3f64 is in state STARTED 2026-03-13 00:48:41.541940 | orchestrator | 2026-03-13 00:48:41 | INFO  | Task 82622195-3605-41c1-b170-8b2daa6eac88 is in state STARTED 2026-03-13 00:48:41.542660 | orchestrator | 2026-03-13 00:48:41 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:48:41.543239 | orchestrator | 2026-03-13 00:48:41 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:48:41.543759 | orchestrator | 2026-03-13 00:48:41 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED 2026-03-13 00:48:41.543780 | orchestrator | 2026-03-13 00:48:41 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:48:44.586619 | orchestrator | 2026-03-13 00:48:44 | INFO  | Task bf20fdea-193b-4caa-ab99-bd80708e3f64 is in state STARTED 2026-03-13 00:48:44.586707 | orchestrator | 2026-03-13 00:48:44 | INFO  | Task 82622195-3605-41c1-b170-8b2daa6eac88 is in state STARTED 2026-03-13 00:48:44.586718 | orchestrator | 2026-03-13 00:48:44 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:48:44.586724 | orchestrator | 2026-03-13 00:48:44 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:48:44.586730 | orchestrator | 2026-03-13 00:48:44 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED 2026-03-13 00:48:44.586746 | orchestrator | 2026-03-13 00:48:44 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:48:47.615463 | orchestrator | 2026-03-13 00:48:47 | INFO  | Task bf20fdea-193b-4caa-ab99-bd80708e3f64 is in state STARTED 2026-03-13 00:48:47.615813 | orchestrator | 2026-03-13 00:48:47 | INFO  | Task 82622195-3605-41c1-b170-8b2daa6eac88 is in state STARTED 2026-03-13 00:48:47.616752 | orchestrator | 2026-03-13 00:48:47 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:48:47.619884 | orchestrator | 2026-03-13 00:48:47 | INFO  
| Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:48:47.621187 | orchestrator | 2026-03-13 00:48:47 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED 2026-03-13 00:48:47.621308 | orchestrator | 2026-03-13 00:48:47 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:48:50.660482 | orchestrator | 2026-03-13 00:48:50 | INFO  | Task bf20fdea-193b-4caa-ab99-bd80708e3f64 is in state STARTED 2026-03-13 00:48:50.660730 | orchestrator | 2026-03-13 00:48:50 | INFO  | Task 82622195-3605-41c1-b170-8b2daa6eac88 is in state STARTED 2026-03-13 00:48:50.664543 | orchestrator | 2026-03-13 00:48:50 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:48:50.665131 | orchestrator | 2026-03-13 00:48:50 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:48:50.668468 | orchestrator | 2026-03-13 00:48:50 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED 2026-03-13 00:48:50.668512 | orchestrator | 2026-03-13 00:48:50 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:48:53.730208 | orchestrator | 2026-03-13 00:48:53 | INFO  | Task bf20fdea-193b-4caa-ab99-bd80708e3f64 is in state STARTED 2026-03-13 00:48:53.730253 | orchestrator | 2026-03-13 00:48:53 | INFO  | Task 82622195-3605-41c1-b170-8b2daa6eac88 is in state STARTED 2026-03-13 00:48:53.731859 | orchestrator | 2026-03-13 00:48:53 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:48:53.733511 | orchestrator | 2026-03-13 00:48:53 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:48:53.734744 | orchestrator | 2026-03-13 00:48:53 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED 2026-03-13 00:48:53.734777 | orchestrator | 2026-03-13 00:48:53 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:48:56.792055 | orchestrator | 2026-03-13 00:48:56 | INFO  | Task 
bf20fdea-193b-4caa-ab99-bd80708e3f64 is in state STARTED 2026-03-13 00:48:56.792376 | orchestrator | 2026-03-13 00:48:56 | INFO  | Task 82622195-3605-41c1-b170-8b2daa6eac88 is in state STARTED 2026-03-13 00:48:56.793447 | orchestrator | 2026-03-13 00:48:56 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:48:56.794094 | orchestrator | 2026-03-13 00:48:56 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:48:56.794715 | orchestrator | 2026-03-13 00:48:56 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED 2026-03-13 00:48:56.794877 | orchestrator | 2026-03-13 00:48:56 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:48:59.840078 | orchestrator | 2026-03-13 00:48:59 | INFO  | Task bf20fdea-193b-4caa-ab99-bd80708e3f64 is in state STARTED 2026-03-13 00:48:59.840131 | orchestrator | 2026-03-13 00:48:59 | INFO  | Task 82622195-3605-41c1-b170-8b2daa6eac88 is in state STARTED 2026-03-13 00:48:59.840137 | orchestrator | 2026-03-13 00:48:59 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:48:59.840141 | orchestrator | 2026-03-13 00:48:59 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:48:59.840145 | orchestrator | 2026-03-13 00:48:59 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED 2026-03-13 00:48:59.840149 | orchestrator | 2026-03-13 00:48:59 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:49:02.882863 | orchestrator | 2026-03-13 00:49:02 | INFO  | Task bf20fdea-193b-4caa-ab99-bd80708e3f64 is in state STARTED 2026-03-13 00:49:02.886858 | orchestrator | 2026-03-13 00:49:02 | INFO  | Task 82622195-3605-41c1-b170-8b2daa6eac88 is in state STARTED 2026-03-13 00:49:02.888248 | orchestrator | 2026-03-13 00:49:02 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:49:02.890818 | orchestrator | 2026-03-13 00:49:02 | INFO  | Task 
344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:49:02.892979 | orchestrator | 2026-03-13 00:49:02 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED 2026-03-13 00:49:02.893028 | orchestrator | 2026-03-13 00:49:02 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:49:05.930492 | orchestrator | 2026-03-13 00:49:05 | INFO  | Task f6abaf64-8ae6-4187-9b2a-8de1335f9d32 is in state STARTED 2026-03-13 00:49:06.019119 | orchestrator | 2026-03-13 00:49:05 | INFO  | Task bf20fdea-193b-4caa-ab99-bd80708e3f64 is in state STARTED 2026-03-13 00:49:06.019154 | orchestrator | 2026-03-13 00:49:06.019160 | orchestrator | 2026-03-13 00:49:05 | INFO  | Task 82622195-3605-41c1-b170-8b2daa6eac88 is in state SUCCESS 2026-03-13 00:49:06.019164 | orchestrator | 2026-03-13 00:49:06.019168 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-13 00:49:06.019172 | orchestrator | 2026-03-13 00:49:06.019176 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-13 00:49:06.019180 | orchestrator | Friday 13 March 2026 00:48:11 +0000 (0:00:00.234) 0:00:00.234 ********** 2026-03-13 00:49:06.019186 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:49:06.019191 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:49:06.019194 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:49:06.019198 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:49:06.019202 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:49:06.019206 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:49:06.019209 | orchestrator | 2026-03-13 00:49:06.019213 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-13 00:49:06.019217 | orchestrator | Friday 13 March 2026 00:48:12 +0000 (0:00:00.603) 0:00:00.837 ********** 2026-03-13 00:49:06.019221 | orchestrator | ok: [testbed-node-0] => 
(item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-13 00:49:06.019225 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-13 00:49:06.019228 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-13 00:49:06.019232 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-13 00:49:06.019236 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-13 00:49:06.019240 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-13 00:49:06.019243 | orchestrator | 2026-03-13 00:49:06.019247 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-03-13 00:49:06.019251 | orchestrator | 2026-03-13 00:49:06.019267 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-03-13 00:49:06.019285 | orchestrator | Friday 13 March 2026 00:48:12 +0000 (0:00:00.802) 0:00:01.640 ********** 2026-03-13 00:49:06.019292 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-13 00:49:06.019299 | orchestrator | 2026-03-13 00:49:06.019305 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-13 00:49:06.019308 | orchestrator | Friday 13 March 2026 00:48:14 +0000 (0:00:01.508) 0:00:03.149 ********** 2026-03-13 00:49:06.019312 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-13 00:49:06.019316 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-13 00:49:06.019320 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-13 00:49:06.019324 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-13 00:49:06.019327 | 
orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-13 00:49:06.019331 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-13 00:49:06.019335 | orchestrator | 2026-03-13 00:49:06.019338 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-13 00:49:06.019342 | orchestrator | Friday 13 March 2026 00:48:15 +0000 (0:00:01.502) 0:00:04.651 ********** 2026-03-13 00:49:06.019346 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-13 00:49:06.019350 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-13 00:49:06.019354 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-13 00:49:06.019357 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-13 00:49:06.019361 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-13 00:49:06.019365 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-13 00:49:06.019375 | orchestrator | 2026-03-13 00:49:06.019379 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-13 00:49:06.019382 | orchestrator | Friday 13 March 2026 00:48:17 +0000 (0:00:01.646) 0:00:06.297 ********** 2026-03-13 00:49:06.019386 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-03-13 00:49:06.019390 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:49:06.019394 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-03-13 00:49:06.019398 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:49:06.019402 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-13 00:49:06.019406 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-13 00:49:06.019416 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:49:06.019419 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-13 00:49:06.019423 | 
orchestrator | skipping: [testbed-node-2] 2026-03-13 00:49:06.019427 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:49:06.019431 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-03-13 00:49:06.019434 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:49:06.019438 | orchestrator | 2026-03-13 00:49:06.019442 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-13 00:49:06.019445 | orchestrator | Friday 13 March 2026 00:48:18 +0000 (0:00:01.104) 0:00:07.402 ********** 2026-03-13 00:49:06.019449 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:49:06.019453 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:49:06.019457 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:49:06.019460 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:49:06.019464 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:49:06.019468 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:49:06.019471 | orchestrator | 2026-03-13 00:49:06.019475 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-03-13 00:49:06.019486 | orchestrator | Friday 13 March 2026 00:48:19 +0000 (0:00:00.634) 0:00:08.036 ********** 2026-03-13 00:49:06.019491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019502 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019521 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019540 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019562 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019573 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019582 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019593 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019599 | orchestrator | 2026-03-13 00:49:06.019605 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-03-13 00:49:06.019611 | orchestrator | Friday 13 March 2026 00:48:20 +0000 (0:00:01.401) 0:00:09.438 ********** 2026-03-13 00:49:06.019617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019635 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019642 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019652 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019664 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019696 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019704 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019715 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019722 | orchestrator | 2026-03-13 00:49:06.019728 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-03-13 00:49:06.019734 | orchestrator | Friday 13 March 2026 00:48:23 +0000 (0:00:02.866) 0:00:12.305 ********** 2026-03-13 00:49:06.019738 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:49:06.019742 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:49:06.019748 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:49:06.019755 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:49:06.019769 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:49:06.019776 | orchestrator | 
skipping: [testbed-node-5] 2026-03-13 00:49:06.019781 | orchestrator | 2026-03-13 00:49:06.019788 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-03-13 00:49:06.019802 | orchestrator | Friday 13 March 2026 00:48:24 +0000 (0:00:01.075) 0:00:13.381 ********** 2026-03-13 00:49:06.019809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019832 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019844 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019856 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019868 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019880 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019886 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-13 00:49:06.019890 | orchestrator | 2026-03-13 00:49:06.019894 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-13 00:49:06.019898 | orchestrator | Friday 13 March 2026 00:48:26 +0000 (0:00:02.270) 0:00:15.651 ********** 2026-03-13 00:49:06.019902 | orchestrator | 2026-03-13 00:49:06.019906 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-13 00:49:06.019910 | orchestrator | Friday 13 March 2026 00:48:27 +0000 (0:00:00.745) 0:00:16.396 ********** 2026-03-13 00:49:06.019913 | orchestrator | 2026-03-13 00:49:06.019917 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-13 00:49:06.019921 | orchestrator | Friday 13 March 
2026 00:48:27 +0000 (0:00:00.228) 0:00:16.625 ********** 2026-03-13 00:49:06.019925 | orchestrator | 2026-03-13 00:49:06.019929 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-13 00:49:06.019932 | orchestrator | Friday 13 March 2026 00:48:27 +0000 (0:00:00.155) 0:00:16.780 ********** 2026-03-13 00:49:06.019936 | orchestrator | 2026-03-13 00:49:06.019940 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-13 00:49:06.019944 | orchestrator | Friday 13 March 2026 00:48:28 +0000 (0:00:00.142) 0:00:16.923 ********** 2026-03-13 00:49:06.019948 | orchestrator | 2026-03-13 00:49:06.019951 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-13 00:49:06.019955 | orchestrator | Friday 13 March 2026 00:48:28 +0000 (0:00:00.131) 0:00:17.054 ********** 2026-03-13 00:49:06.019959 | orchestrator | 2026-03-13 00:49:06.019963 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-03-13 00:49:06.019966 | orchestrator | Friday 13 March 2026 00:48:28 +0000 (0:00:00.133) 0:00:17.187 ********** 2026-03-13 00:49:06.019970 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:49:06.019974 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:49:06.019978 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:49:06.019982 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:49:06.019987 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:49:06.019994 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:49:06.020000 | orchestrator | 2026-03-13 00:49:06.020009 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-03-13 00:49:06.020016 | orchestrator | Friday 13 March 2026 00:48:33 +0000 (0:00:04.661) 0:00:21.849 ********** 2026-03-13 00:49:06.020023 | orchestrator | ok: [testbed-node-1] 2026-03-13 
00:49:06.020029 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:49:06.020035 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:49:06.020042 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:49:06.020049 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:49:06.020055 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:49:06.020059 | orchestrator | 2026-03-13 00:49:06.020063 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-13 00:49:06.020067 | orchestrator | Friday 13 March 2026 00:48:34 +0000 (0:00:01.538) 0:00:23.387 ********** 2026-03-13 00:49:06.020070 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:49:06.020074 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:49:06.020082 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:49:06.020086 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:49:06.020093 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:49:06.020097 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:49:06.020101 | orchestrator | 2026-03-13 00:49:06.020105 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-03-13 00:49:06.020109 | orchestrator | Friday 13 March 2026 00:48:39 +0000 (0:00:04.887) 0:00:28.275 ********** 2026-03-13 00:49:06.020113 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-03-13 00:49:06.020117 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-03-13 00:49:06.020121 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-03-13 00:49:06.020131 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-03-13 00:49:06.020138 | orchestrator | changed: [testbed-node-3] => 
(item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-03-13 00:49:06.020142 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-03-13 00:49:06.020146 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-03-13 00:49:06.020153 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-03-13 00:49:06.020159 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-03-13 00:49:06.020167 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-03-13 00:49:06.020176 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-03-13 00:49:06.020182 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-03-13 00:49:06.020188 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-13 00:49:06.020194 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-13 00:49:06.020200 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-13 00:49:06.020206 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-13 00:49:06.020213 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-13 00:49:06.020218 | orchestrator | ok: [testbed-node-5] => (item={'col': 
'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-13 00:49:06.020223 | orchestrator | 2026-03-13 00:49:06.020230 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-03-13 00:49:06.020236 | orchestrator | Friday 13 March 2026 00:48:48 +0000 (0:00:08.651) 0:00:36.926 ********** 2026-03-13 00:49:06.020242 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-03-13 00:49:06.020249 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:49:06.020284 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-03-13 00:49:06.020289 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:49:06.020293 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-03-13 00:49:06.020296 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:49:06.020300 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-03-13 00:49:06.020305 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-03-13 00:49:06.020309 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-03-13 00:49:06.020317 | orchestrator | 2026-03-13 00:49:06.020321 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-03-13 00:49:06.020325 | orchestrator | Friday 13 March 2026 00:48:50 +0000 (0:00:02.292) 0:00:39.219 ********** 2026-03-13 00:49:06.020329 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-03-13 00:49:06.020333 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:49:06.020336 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-03-13 00:49:06.020340 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:49:06.020344 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-03-13 00:49:06.020348 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:49:06.020352 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 
2026-03-13 00:49:06.020356 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-03-13 00:49:06.020359 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-03-13 00:49:06.020363 | orchestrator | 2026-03-13 00:49:06.020367 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-13 00:49:06.020371 | orchestrator | Friday 13 March 2026 00:48:55 +0000 (0:00:05.015) 0:00:44.235 ********** 2026-03-13 00:49:06.020375 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:49:06.020379 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:49:06.020385 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:49:06.020389 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:49:06.020393 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:49:06.020396 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:49:06.020400 | orchestrator | 2026-03-13 00:49:06.020404 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 00:49:06.020408 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-13 00:49:06.020413 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-13 00:49:06.020417 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-13 00:49:06.020424 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-13 00:49:06.020429 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-13 00:49:06.020432 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-13 00:49:06.020436 | orchestrator | 2026-03-13 00:49:06.020440 | orchestrator | 2026-03-13 00:49:06.020444 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 00:49:06.020448 | orchestrator | Friday 13 March 2026 00:49:04 +0000 (0:00:09.091) 0:00:53.326 ********** 2026-03-13 00:49:06.020452 | orchestrator | =============================================================================== 2026-03-13 00:49:06.020456 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 13.98s 2026-03-13 00:49:06.020459 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.65s 2026-03-13 00:49:06.020463 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 5.02s 2026-03-13 00:49:06.020467 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 4.66s 2026-03-13 00:49:06.020471 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.87s 2026-03-13 00:49:06.020475 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.29s 2026-03-13 00:49:06.020478 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.27s 2026-03-13 00:49:06.020486 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.65s 2026-03-13 00:49:06.020490 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.54s 2026-03-13 00:49:06.020494 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.54s 2026-03-13 00:49:06.020498 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.51s 2026-03-13 00:49:06.020502 | orchestrator | module-load : Load modules ---------------------------------------------- 1.50s 2026-03-13 00:49:06.020505 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.40s 2026-03-13 00:49:06.020509 | orchestrator | 
module-load : Drop module persistence ----------------------------------- 1.10s 2026-03-13 00:49:06.020513 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.08s 2026-03-13 00:49:06.020517 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.80s 2026-03-13 00:49:06.020521 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.63s 2026-03-13 00:49:06.020527 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.60s 2026-03-13 00:49:06.020533 | orchestrator | 2026-03-13 00:49:05 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:49:06.020539 | orchestrator | 2026-03-13 00:49:05 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:49:06.020545 | orchestrator | 2026-03-13 00:49:05 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED 2026-03-13 00:49:06.020551 | orchestrator | 2026-03-13 00:49:05 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:49:08.977440 | orchestrator | 2026-03-13 00:49:08 | INFO  | Task f6abaf64-8ae6-4187-9b2a-8de1335f9d32 is in state STARTED 2026-03-13 00:49:08.977721 | orchestrator | 2026-03-13 00:49:08 | INFO  | Task bf20fdea-193b-4caa-ab99-bd80708e3f64 is in state STARTED 2026-03-13 00:49:08.978428 | orchestrator | 2026-03-13 00:49:08 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:49:08.979998 | orchestrator | 2026-03-13 00:49:08 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:49:08.981082 | orchestrator | 2026-03-13 00:49:08 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED 2026-03-13 00:49:08.981128 | orchestrator | 2026-03-13 00:49:08 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:49:12.014143 | orchestrator | 2026-03-13 00:49:12 | INFO  | Task 
344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED
2026-03-13 00:50:19.063650 | orchestrator | 2026-03-13 00:50:19 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state STARTED
2026-03-13 00:50:19.063703 | orchestrator | 2026-03-13 00:50:19 | INFO  | Wait 1 second(s) until the next check
2026-03-13 00:50:22.255576 | orchestrator | 2026-03-13 00:50:22 | INFO  | Task f6abaf64-8ae6-4187-9b2a-8de1335f9d32 is in state STARTED
2026-03-13 00:50:22.255948 | orchestrator | 2026-03-13 00:50:22 | INFO  | Task bf20fdea-193b-4caa-ab99-bd80708e3f64 is in state STARTED
2026-03-13 00:50:22.256639 | orchestrator | 2026-03-13 00:50:22 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED
2026-03-13 00:50:22.257303 | orchestrator | 2026-03-13 00:50:22 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED
2026-03-13 00:50:22.259006 | orchestrator | 2026-03-13 00:50:22 | INFO  | Task 12c0419a-a5e8-41f6-a46b-d5c68b40adce is in state SUCCESS
2026-03-13 00:50:22.260193 | orchestrator |
2026-03-13 00:50:22.260231 | orchestrator |
2026-03-13 00:50:22.260237 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-03-13 00:50:22.260242 | orchestrator |
2026-03-13 00:50:22.260247 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-03-13 00:50:22.260251 | orchestrator | Friday 13 March 2026 00:46:01 +0000 (0:00:00.147) 0:00:00.147 **********
2026-03-13 00:50:22.260256 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:50:22.260261 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:50:22.260265 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:50:22.260272 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:50:22.260280 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:50:22.260289 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:50:22.260295 | orchestrator |
2026-03-13 00:50:22.260302 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-03-13 00:50:22.260310 | orchestrator | Friday 13 March 2026 00:46:02 +0000 (0:00:00.603) 0:00:00.750 **********
2026-03-13 00:50:22.260317 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:50:22.260342 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:50:22.260350 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:50:22.260356 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:50:22.260363 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:50:22.260369 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:50:22.260376 | orchestrator |
2026-03-13 00:50:22.260383 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-03-13 00:50:22.260387 | orchestrator | Friday 13 March 2026 00:46:02 +0000 (0:00:00.555) 0:00:01.306 **********
2026-03-13 00:50:22.260391 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:50:22.260394 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:50:22.260398 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:50:22.260402 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:50:22.260406 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:50:22.260410 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:50:22.260413 | orchestrator |
2026-03-13 00:50:22.260417 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-03-13 00:50:22.260421 | orchestrator | Friday 13 March 2026 00:46:03 +0000 (0:00:00.798) 0:00:02.104 **********
2026-03-13 00:50:22.260425 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:50:22.260429 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:50:22.260432 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:50:22.260436 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:50:22.260440 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:50:22.260443 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:50:22.260447 | orchestrator |
2026-03-13 00:50:22.260451 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-03-13 00:50:22.260454 | orchestrator | Friday 13 March 2026 00:46:06 +0000 (0:00:02.914) 0:00:05.018 **********
2026-03-13 00:50:22.260458 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:50:22.260462 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:50:22.260465 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:50:22.260469 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:50:22.260472 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:50:22.260476 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:50:22.260480 | orchestrator |
2026-03-13 00:50:22.260483 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-03-13 00:50:22.260487 | orchestrator | Friday 13 March 2026 00:46:07 +0000 (0:00:01.043) 0:00:06.062 **********
2026-03-13 00:50:22.260491 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:50:22.260494 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:50:22.260499 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:50:22.260502 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:50:22.260506 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:50:22.260510 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:50:22.260513 | orchestrator |
2026-03-13 00:50:22.260517 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-03-13 00:50:22.260521 | orchestrator | Friday 13 March 2026 00:46:08 +0000 (0:00:01.005) 0:00:07.067 **********
2026-03-13 00:50:22.260524 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:50:22.260528 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:50:22.260532 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:50:22.260535 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:50:22.260539 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:50:22.260543 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:50:22.260546 | orchestrator |
2026-03-13 00:50:22.260550 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-03-13 00:50:22.260554 | orchestrator | Friday 13 March 2026 00:46:09 +0000 (0:00:00.652) 0:00:07.720 **********
2026-03-13 00:50:22.260557 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:50:22.260561 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:50:22.260565 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:50:22.260568 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:50:22.260575 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:50:22.260579 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:50:22.260583 | orchestrator |
2026-03-13 00:50:22.260586 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-03-13 00:50:22.260590 | orchestrator | Friday 13 March 2026 00:46:09 +0000 (0:00:00.798) 0:00:08.518 **********
2026-03-13 00:50:22.260594 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-13 00:50:22.260597 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-13 00:50:22.260601 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:50:22.260605 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-13 00:50:22.260608 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-13 00:50:22.260612 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:50:22.260616 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-13 00:50:22.260620 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-13 00:50:22.260623 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:50:22.260631 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-13 00:50:22.260643 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-13 00:50:22.260647 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:50:22.260650 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-13 00:50:22.260654 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-13 00:50:22.260658 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:50:22.260661 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-13 00:50:22.260665 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-13 00:50:22.260669 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:50:22.260672 | orchestrator |
2026-03-13 00:50:22.260676 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-03-13 00:50:22.260680 | orchestrator | Friday 13 March 2026 00:46:10 +0000 (0:00:00.899) 0:00:09.418 **********
2026-03-13 00:50:22.260683 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:50:22.260687 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:50:22.260691 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:50:22.260694 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:50:22.260698 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:50:22.260702 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:50:22.260706 | orchestrator |
2026-03-13 00:50:22.260709 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-03-13 00:50:22.260713 | orchestrator | Friday 13 March 2026 00:46:12 +0000 (0:00:01.606) 0:00:11.025 **********
2026-03-13 00:50:22.260717 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:50:22.260721 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:50:22.260725 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:50:22.260728 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:50:22.260732 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:50:22.260736 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:50:22.260739 | orchestrator |
2026-03-13 00:50:22.260743 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-03-13 00:50:22.260747 | orchestrator | Friday 13 March 2026 00:46:13 +0000 (0:00:01.076) 0:00:12.101 **********
2026-03-13 00:50:22.260750 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:50:22.260754 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:50:22.260758 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:50:22.260762 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:50:22.260765 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:50:22.260769 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:50:22.260775 | orchestrator |
2026-03-13 00:50:22.260779 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-03-13 00:50:22.260782 | orchestrator | Friday 13 March 2026 00:46:19 +0000 (0:00:05.480) 0:00:17.582 **********
2026-03-13 00:50:22.260786 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:50:22.260790 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:50:22.260794 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:50:22.260797 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:50:22.260801 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:50:22.260805 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:50:22.260808 | orchestrator |
2026-03-13 00:50:22.260812 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-03-13 00:50:22.260816 | orchestrator | Friday 13 March 2026 00:46:20 +0000 (0:00:01.601) 0:00:19.184 **********
2026-03-13 00:50:22.260820 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:50:22.260823 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:50:22.260827 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:50:22.260831 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:50:22.260834 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:50:22.260838 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:50:22.260842 | orchestrator |
2026-03-13 00:50:22.260846 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-03-13 00:50:22.260850 | orchestrator | Friday 13 March 2026 00:46:25 +0000 (0:00:04.605) 0:00:23.789 **********
2026-03-13 00:50:22.260854 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:50:22.260857 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:50:22.260861 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:50:22.260865 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:50:22.260868 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:50:22.260872 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:50:22.260876 | orchestrator |
2026-03-13 00:50:22.260879 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-03-13 00:50:22.260883 | orchestrator | Friday 13 March 2026 00:46:26 +0000 (0:00:01.067) 0:00:24.857 **********
2026-03-13 00:50:22.260887 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-03-13 00:50:22.260891 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-03-13 00:50:22.260895 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-03-13 00:50:22.260898 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-03-13 00:50:22.260902 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:50:22.260906 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-03-13 00:50:22.260909 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-03-13 00:50:22.260913 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:50:22.260917 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-03-13 00:50:22.260920 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-03-13 00:50:22.260924 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:50:22.260928 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-03-13 00:50:22.260931 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-03-13 00:50:22.260935 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:50:22.260939 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:50:22.260943 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-03-13 00:50:22.260946 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-03-13 00:50:22.260950 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:50:22.260954 | orchestrator |
2026-03-13 00:50:22.260959 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-03-13 00:50:22.260965 | orchestrator | Friday 13 March 2026 00:46:27 +0000 (0:00:00.888) 0:00:25.745 **********
2026-03-13 00:50:22.260969 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:50:22.260973 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:50:22.260979 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:50:22.260983 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:50:22.260986 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:50:22.260990 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:50:22.260994 | orchestrator |
2026-03-13 00:50:22.260997 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-03-13 00:50:22.261001 | orchestrator | Friday 13 March 2026 00:46:27 +0000 (0:00:00.536) 0:00:26.281 **********
2026-03-13 00:50:22.261005 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:50:22.261009 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:50:22.261012 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:50:22.261016 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:50:22.261020 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:50:22.261023 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:50:22.261027 | orchestrator |
2026-03-13 00:50:22.261031 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-03-13 00:50:22.261035 | orchestrator |
2026-03-13 00:50:22.261038 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-03-13 00:50:22.261042 | orchestrator | Friday 13 March 2026 00:46:29 +0000 (0:00:01.325) 0:00:27.607 **********
2026-03-13 00:50:22.261046 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:50:22.261049 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:50:22.261053 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:50:22.261057 | orchestrator |
2026-03-13 00:50:22.261061 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-03-13 00:50:22.261064 | orchestrator | Friday 13 March 2026 00:46:30 +0000 (0:00:01.476) 0:00:29.083 **********
2026-03-13 00:50:22.261068 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:50:22.261072 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:50:22.261076 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:50:22.261079 | orchestrator |
2026-03-13 00:50:22.261083 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-03-13 00:50:22.261087 | orchestrator | Friday 13 March 2026 00:46:32 +0000 (0:00:01.611) 0:00:30.695 **********
2026-03-13 00:50:22.261091 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:50:22.261094 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:50:22.261098 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:50:22.261102 | orchestrator |
2026-03-13 00:50:22.261105 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-03-13 00:50:22.261109 | orchestrator | Friday 13 March 2026 00:46:32 +0000 (0:00:00.840) 0:00:31.535 **********
2026-03-13 00:50:22.261113 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:50:22.261116 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:50:22.261120 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:50:22.261124 | orchestrator |
2026-03-13 00:50:22.261127 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-03-13 00:50:22.261131 | orchestrator | Friday 13 March 2026 00:46:34 +0000 (0:00:01.800) 0:00:33.336 **********
2026-03-13 00:50:22.261135 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:50:22.261139 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:50:22.261142 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:50:22.261146 | orchestrator |
2026-03-13 00:50:22.261150 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-03-13 00:50:22.261154 | orchestrator | Friday 13 March 2026 00:46:35 +0000 (0:00:00.761) 0:00:34.098 **********
2026-03-13 00:50:22.261157 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:50:22.261161 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:50:22.261165 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:50:22.261169 | orchestrator |
2026-03-13 00:50:22.261172 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-03-13 00:50:22.261176 | orchestrator | Friday 13 March 2026 00:46:36 +0000 (0:00:01.108) 0:00:35.206 **********
2026-03-13 00:50:22.261180 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:50:22.261184 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:50:22.261189 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:50:22.261193 | orchestrator |
2026-03-13 00:50:22.261197 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-03-13 00:50:22.261201 | orchestrator | Friday 13 March 2026 00:46:38 +0000 (0:00:01.358) 0:00:36.565 **********
2026-03-13 00:50:22.261204 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 00:50:22.261208 | orchestrator |
2026-03-13 00:50:22.261226 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-03-13 00:50:22.261233 | orchestrator | Friday 13 March 2026 00:46:39 +0000 (0:00:01.023) 0:00:37.589 **********
2026-03-13 00:50:22.261239 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:50:22.261245 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:50:22.261251 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:50:22.261259 | orchestrator |
2026-03-13 00:50:22.261266 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-03-13 00:50:22.261273 | orchestrator | Friday 13 March 2026 00:46:42 +0000 (0:00:03.425) 0:00:41.014 **********
2026-03-13 00:50:22.261280 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:50:22.261286 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:50:22.261292 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:50:22.261299 | orchestrator |
2026-03-13 00:50:22.261305 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-03-13 00:50:22.261311 | orchestrator | Friday 13 March 2026 00:46:43 +0000 (0:00:00.737) 0:00:41.752 **********
2026-03-13 00:50:22.261317 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:50:22.261323 | orchestrator |
skipping: [testbed-node-2] 2026-03-13 00:50:22.261329 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:50:22.261343 | orchestrator | 2026-03-13 00:50:22.261348 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-03-13 00:50:22.261356 | orchestrator | Friday 13 March 2026 00:46:44 +0000 (0:00:00.958) 0:00:42.710 ********** 2026-03-13 00:50:22.261360 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:50:22.261364 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:50:22.261370 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:50:22.261374 | orchestrator | 2026-03-13 00:50:22.261378 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-03-13 00:50:22.261385 | orchestrator | Friday 13 March 2026 00:46:46 +0000 (0:00:01.890) 0:00:44.600 ********** 2026-03-13 00:50:22.261389 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:50:22.261393 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:50:22.261397 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:50:22.261400 | orchestrator | 2026-03-13 00:50:22.261404 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-03-13 00:50:22.261408 | orchestrator | Friday 13 March 2026 00:46:47 +0000 (0:00:00.996) 0:00:45.597 ********** 2026-03-13 00:50:22.261412 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:50:22.261415 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:50:22.261419 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:50:22.261423 | orchestrator | 2026-03-13 00:50:22.261426 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-03-13 00:50:22.261430 | orchestrator | Friday 13 March 2026 00:46:47 +0000 (0:00:00.691) 0:00:46.288 ********** 2026-03-13 00:50:22.261434 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:50:22.261438 | orchestrator | 
changed: [testbed-node-1] 2026-03-13 00:50:22.261441 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:50:22.261445 | orchestrator | 2026-03-13 00:50:22.261449 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-03-13 00:50:22.261453 | orchestrator | Friday 13 March 2026 00:46:49 +0000 (0:00:02.109) 0:00:48.398 ********** 2026-03-13 00:50:22.261456 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:50:22.261460 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:50:22.261464 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:50:22.261468 | orchestrator | 2026-03-13 00:50:22.261471 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-03-13 00:50:22.261479 | orchestrator | Friday 13 March 2026 00:46:52 +0000 (0:00:02.532) 0:00:50.931 ********** 2026-03-13 00:50:22.261483 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:50:22.261486 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:50:22.261490 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:50:22.261494 | orchestrator | 2026-03-13 00:50:22.261498 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-03-13 00:50:22.261502 | orchestrator | Friday 13 March 2026 00:46:53 +0000 (0:00:00.688) 0:00:51.619 ********** 2026-03-13 00:50:22.261506 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-13 00:50:22.261510 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-13 00:50:22.261514 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 
2026-03-13 00:50:22.261518 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-13 00:50:22.261522 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-13 00:50:22.261525 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-13 00:50:22.261529 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-13 00:50:22.261533 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-13 00:50:22.261537 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-13 00:50:22.261540 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-13 00:50:22.261544 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-13 00:50:22.261548 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
2026-03-13 00:50:22.261552 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:50:22.261556 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:50:22.261559 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:50:22.261563 | orchestrator | 2026-03-13 00:50:22.261567 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-03-13 00:50:22.261571 | orchestrator | Friday 13 March 2026 00:47:36 +0000 (0:00:43.667) 0:01:35.286 ********** 2026-03-13 00:50:22.261575 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:50:22.261579 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:50:22.261582 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:50:22.261681 | orchestrator | 2026-03-13 00:50:22.261686 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-03-13 00:50:22.261690 | orchestrator | Friday 13 March 2026 00:47:37 +0000 (0:00:00.302) 0:01:35.589 ********** 2026-03-13 00:50:22.261702 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:50:22.261706 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:50:22.261710 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:50:22.261728 | orchestrator | 2026-03-13 00:50:22.261737 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-03-13 00:50:22.261743 | orchestrator | Friday 13 March 2026 00:47:38 +0000 (0:00:01.151) 0:01:36.740 ********** 2026-03-13 00:50:22.261747 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:50:22.261763 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:50:22.261767 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:50:22.261771 | orchestrator | 2026-03-13 00:50:22.261777 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-03-13 00:50:22.261781 | orchestrator | Friday 13 March 2026 00:47:39 +0000 (0:00:01.425) 0:01:38.165 ********** 2026-03-13 00:50:22.261785 
| orchestrator | changed: [testbed-node-2] 2026-03-13 00:50:22.261789 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:50:22.261793 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:50:22.261796 | orchestrator | 2026-03-13 00:50:22.261800 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-03-13 00:50:22.261804 | orchestrator | Friday 13 March 2026 00:48:02 +0000 (0:00:22.701) 0:02:00.867 ********** 2026-03-13 00:50:22.261808 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:50:22.261812 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:50:22.261824 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:50:22.261837 | orchestrator | 2026-03-13 00:50:22.261841 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-03-13 00:50:22.261845 | orchestrator | Friday 13 March 2026 00:48:02 +0000 (0:00:00.597) 0:02:01.465 ********** 2026-03-13 00:50:22.261848 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:50:22.261852 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:50:22.261856 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:50:22.261859 | orchestrator | 2026-03-13 00:50:22.261863 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-03-13 00:50:22.261867 | orchestrator | Friday 13 March 2026 00:48:03 +0000 (0:00:00.606) 0:02:02.071 ********** 2026-03-13 00:50:22.261871 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:50:22.261875 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:50:22.261879 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:50:22.261883 | orchestrator | 2026-03-13 00:50:22.261886 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-03-13 00:50:22.261890 | orchestrator | Friday 13 March 2026 00:48:04 +0000 (0:00:00.637) 0:02:02.709 ********** 2026-03-13 00:50:22.261894 | orchestrator | ok: [testbed-node-0] 
2026-03-13 00:50:22.261897 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:50:22.261901 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:50:22.261905 | orchestrator | 2026-03-13 00:50:22.261910 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-03-13 00:50:22.261913 | orchestrator | Friday 13 March 2026 00:48:04 +0000 (0:00:00.739) 0:02:03.449 ********** 2026-03-13 00:50:22.261917 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:50:22.261921 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:50:22.261925 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:50:22.261928 | orchestrator | 2026-03-13 00:50:22.261932 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-03-13 00:50:22.261936 | orchestrator | Friday 13 March 2026 00:48:05 +0000 (0:00:00.260) 0:02:03.710 ********** 2026-03-13 00:50:22.261940 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:50:22.261943 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:50:22.261947 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:50:22.261951 | orchestrator | 2026-03-13 00:50:22.261954 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-03-13 00:50:22.261958 | orchestrator | Friday 13 March 2026 00:48:05 +0000 (0:00:00.579) 0:02:04.289 ********** 2026-03-13 00:50:22.261962 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:50:22.261966 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:50:22.261969 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:50:22.261973 | orchestrator | 2026-03-13 00:50:22.261977 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-03-13 00:50:22.261981 | orchestrator | Friday 13 March 2026 00:48:06 +0000 (0:00:00.619) 0:02:04.909 ********** 2026-03-13 00:50:22.261984 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:50:22.261988 | 
orchestrator | changed: [testbed-node-1] 2026-03-13 00:50:22.261992 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:50:22.261998 | orchestrator | 2026-03-13 00:50:22.262002 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-03-13 00:50:22.262006 | orchestrator | Friday 13 March 2026 00:48:07 +0000 (0:00:00.941) 0:02:05.850 ********** 2026-03-13 00:50:22.262009 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:50:22.262047 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:50:22.262052 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:50:22.262055 | orchestrator | 2026-03-13 00:50:22.262059 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-03-13 00:50:22.262063 | orchestrator | Friday 13 March 2026 00:48:08 +0000 (0:00:00.717) 0:02:06.568 ********** 2026-03-13 00:50:22.262067 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:50:22.262071 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:50:22.262074 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:50:22.262078 | orchestrator | 2026-03-13 00:50:22.262082 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-03-13 00:50:22.262086 | orchestrator | Friday 13 March 2026 00:48:08 +0000 (0:00:00.257) 0:02:06.826 ********** 2026-03-13 00:50:22.262089 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:50:22.262093 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:50:22.262097 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:50:22.262101 | orchestrator | 2026-03-13 00:50:22.262104 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-03-13 00:50:22.262108 | orchestrator | Friday 13 March 2026 00:48:08 +0000 (0:00:00.241) 0:02:07.067 ********** 2026-03-13 00:50:22.262112 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:50:22.262116 | orchestrator | 
ok: [testbed-node-0] 2026-03-13 00:50:22.262120 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:50:22.262123 | orchestrator | 2026-03-13 00:50:22.262127 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-03-13 00:50:22.262131 | orchestrator | Friday 13 March 2026 00:48:09 +0000 (0:00:00.837) 0:02:07.904 ********** 2026-03-13 00:50:22.262135 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:50:22.262138 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:50:22.262142 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:50:22.262146 | orchestrator | 2026-03-13 00:50:22.262150 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-03-13 00:50:22.262154 | orchestrator | Friday 13 March 2026 00:48:10 +0000 (0:00:00.680) 0:02:08.585 ********** 2026-03-13 00:50:22.262158 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-13 00:50:22.262165 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-13 00:50:22.262169 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-13 00:50:22.262173 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-13 00:50:22.262177 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-13 00:50:22.262181 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-13 00:50:22.262185 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-13 00:50:22.262189 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-13 
00:50:22.262193 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-13 00:50:22.262197 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-03-13 00:50:22.262200 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-13 00:50:22.262204 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-13 00:50:22.262284 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-03-13 00:50:22.262295 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-13 00:50:22.262302 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-13 00:50:22.262309 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-13 00:50:22.262316 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-13 00:50:22.262322 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-13 00:50:22.262330 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-13 00:50:22.262336 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-13 00:50:22.262343 | orchestrator | 2026-03-13 00:50:22.262349 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-03-13 00:50:22.262356 | orchestrator | 2026-03-13 00:50:22.262362 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-03-13 00:50:22.262369 | orchestrator | Friday 13 March 2026 00:48:12 +0000 (0:00:02.811) 
0:02:11.397 ********** 2026-03-13 00:50:22.262375 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:50:22.262382 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:50:22.262388 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:50:22.262391 | orchestrator | 2026-03-13 00:50:22.262395 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-03-13 00:50:22.262399 | orchestrator | Friday 13 March 2026 00:48:13 +0000 (0:00:00.591) 0:02:11.989 ********** 2026-03-13 00:50:22.262403 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:50:22.262407 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:50:22.262410 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:50:22.262414 | orchestrator | 2026-03-13 00:50:22.262418 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-03-13 00:50:22.262715 | orchestrator | Friday 13 March 2026 00:48:14 +0000 (0:00:00.628) 0:02:12.618 ********** 2026-03-13 00:50:22.262731 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:50:22.262735 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:50:22.262739 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:50:22.262742 | orchestrator | 2026-03-13 00:50:22.262747 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-03-13 00:50:22.262753 | orchestrator | Friday 13 March 2026 00:48:14 +0000 (0:00:00.285) 0:02:12.903 ********** 2026-03-13 00:50:22.262759 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-13 00:50:22.262767 | orchestrator | 2026-03-13 00:50:22.262776 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-03-13 00:50:22.262781 | orchestrator | Friday 13 March 2026 00:48:14 +0000 (0:00:00.459) 0:02:13.363 ********** 2026-03-13 00:50:22.262787 | orchestrator | skipping: [testbed-node-3] 2026-03-13 
00:50:22.262793 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:50:22.262799 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:50:22.262804 | orchestrator | 2026-03-13 00:50:22.262809 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-03-13 00:50:22.262815 | orchestrator | Friday 13 March 2026 00:48:15 +0000 (0:00:00.261) 0:02:13.624 ********** 2026-03-13 00:50:22.262820 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:50:22.262826 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:50:22.262831 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:50:22.262837 | orchestrator | 2026-03-13 00:50:22.262843 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-03-13 00:50:22.262849 | orchestrator | Friday 13 March 2026 00:48:15 +0000 (0:00:00.289) 0:02:13.914 ********** 2026-03-13 00:50:22.262856 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:50:22.262869 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:50:22.262875 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:50:22.262882 | orchestrator | 2026-03-13 00:50:22.262887 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-03-13 00:50:22.262891 | orchestrator | Friday 13 March 2026 00:48:15 +0000 (0:00:00.240) 0:02:14.154 ********** 2026-03-13 00:50:22.262895 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:50:22.262899 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:50:22.262903 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:50:22.262906 | orchestrator | 2026-03-13 00:50:22.262916 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-03-13 00:50:22.262920 | orchestrator | Friday 13 March 2026 00:48:16 +0000 (0:00:00.791) 0:02:14.946 ********** 2026-03-13 00:50:22.262926 | orchestrator | changed: [testbed-node-3] 2026-03-13 
00:50:22.262930 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:50:22.262934 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:50:22.262937 | orchestrator | 2026-03-13 00:50:22.262941 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-03-13 00:50:22.262945 | orchestrator | Friday 13 March 2026 00:48:17 +0000 (0:00:01.313) 0:02:16.259 ********** 2026-03-13 00:50:22.262948 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:50:22.262953 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:50:22.262961 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:50:22.262970 | orchestrator | 2026-03-13 00:50:22.262976 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-03-13 00:50:22.262982 | orchestrator | Friday 13 March 2026 00:48:19 +0000 (0:00:01.370) 0:02:17.630 ********** 2026-03-13 00:50:22.262988 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:50:22.262994 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:50:22.263000 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:50:22.263006 | orchestrator | 2026-03-13 00:50:22.263013 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-13 00:50:22.263020 | orchestrator | 2026-03-13 00:50:22.263026 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-13 00:50:22.263032 | orchestrator | Friday 13 March 2026 00:48:29 +0000 (0:00:10.385) 0:02:28.015 ********** 2026-03-13 00:50:22.263038 | orchestrator | ok: [testbed-manager] 2026-03-13 00:50:22.263044 | orchestrator | 2026-03-13 00:50:22.263050 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-13 00:50:22.263057 | orchestrator | Friday 13 March 2026 00:48:30 +0000 (0:00:01.079) 0:02:29.095 ********** 2026-03-13 00:50:22.263063 | orchestrator | changed: [testbed-manager] 
2026-03-13 00:50:22.263069 | orchestrator | 2026-03-13 00:50:22.263075 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-13 00:50:22.263081 | orchestrator | Friday 13 March 2026 00:48:31 +0000 (0:00:00.538) 0:02:29.633 ********** 2026-03-13 00:50:22.263088 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-13 00:50:22.263094 | orchestrator | 2026-03-13 00:50:22.263100 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-13 00:50:22.263107 | orchestrator | Friday 13 March 2026 00:48:31 +0000 (0:00:00.540) 0:02:30.174 ********** 2026-03-13 00:50:22.263113 | orchestrator | changed: [testbed-manager] 2026-03-13 00:50:22.263119 | orchestrator | 2026-03-13 00:50:22.263125 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-13 00:50:22.263132 | orchestrator | Friday 13 March 2026 00:48:32 +0000 (0:00:00.977) 0:02:31.151 ********** 2026-03-13 00:50:22.263138 | orchestrator | changed: [testbed-manager] 2026-03-13 00:50:22.263144 | orchestrator | 2026-03-13 00:50:22.263151 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-13 00:50:22.263157 | orchestrator | Friday 13 March 2026 00:48:33 +0000 (0:00:00.563) 0:02:31.714 ********** 2026-03-13 00:50:22.263163 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-13 00:50:22.263170 | orchestrator | 2026-03-13 00:50:22.263176 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-13 00:50:22.263188 | orchestrator | Friday 13 March 2026 00:48:34 +0000 (0:00:01.535) 0:02:33.250 ********** 2026-03-13 00:50:22.263194 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-13 00:50:22.263200 | orchestrator | 2026-03-13 00:50:22.263207 | orchestrator | TASK [Set KUBECONFIG environment variable] 
*************************************
2026-03-13 00:50:22.263245 | orchestrator | Friday 13 March 2026 00:48:35 +0000 (0:00:00.671) 0:02:33.921 **********
2026-03-13 00:50:22.263252 | orchestrator | changed: [testbed-manager]
2026-03-13 00:50:22.263258 | orchestrator |
2026-03-13 00:50:22.263265 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-03-13 00:50:22.263271 | orchestrator | Friday 13 March 2026 00:48:35 +0000 (0:00:00.512) 0:02:34.433 **********
2026-03-13 00:50:22.263278 | orchestrator | changed: [testbed-manager]
2026-03-13 00:50:22.263284 | orchestrator |
2026-03-13 00:50:22.263290 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-03-13 00:50:22.263296 | orchestrator |
2026-03-13 00:50:22.263303 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-03-13 00:50:22.263309 | orchestrator | Friday 13 March 2026 00:48:36 +0000 (0:00:00.424) 0:02:34.858 **********
2026-03-13 00:50:22.263316 | orchestrator | ok: [testbed-manager]
2026-03-13 00:50:22.263322 | orchestrator |
2026-03-13 00:50:22.263328 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-03-13 00:50:22.263335 | orchestrator | Friday 13 March 2026 00:48:36 +0000 (0:00:00.137) 0:02:34.995 **********
2026-03-13 00:50:22.263341 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-03-13 00:50:22.263348 | orchestrator |
2026-03-13 00:50:22.263354 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-03-13 00:50:22.263360 | orchestrator | Friday 13 March 2026 00:48:36 +0000 (0:00:00.193) 0:02:35.189 **********
2026-03-13 00:50:22.263366 | orchestrator | ok: [testbed-manager]
2026-03-13 00:50:22.263373 | orchestrator |
2026-03-13 00:50:22.263379 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-03-13 00:50:22.263385 | orchestrator | Friday 13 March 2026 00:48:37 +0000 (0:00:00.749) 0:02:35.938 **********
2026-03-13 00:50:22.263392 | orchestrator | ok: [testbed-manager]
2026-03-13 00:50:22.263398 | orchestrator |
2026-03-13 00:50:22.263404 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-03-13 00:50:22.263411 | orchestrator | Friday 13 March 2026 00:48:38 +0000 (0:00:01.238) 0:02:37.177 **********
2026-03-13 00:50:22.263417 | orchestrator | changed: [testbed-manager]
2026-03-13 00:50:22.263423 | orchestrator |
2026-03-13 00:50:22.263430 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-03-13 00:50:22.263436 | orchestrator | Friday 13 March 2026 00:48:39 +0000 (0:00:00.663) 0:02:37.841 **********
2026-03-13 00:50:22.263443 | orchestrator | ok: [testbed-manager]
2026-03-13 00:50:22.263449 | orchestrator |
2026-03-13 00:50:22.263460 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-03-13 00:50:22.263467 | orchestrator | Friday 13 March 2026 00:48:39 +0000 (0:00:00.392) 0:02:38.234 **********
2026-03-13 00:50:22.263477 | orchestrator | changed: [testbed-manager]
2026-03-13 00:50:22.263484 | orchestrator |
2026-03-13 00:50:22.263490 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-03-13 00:50:22.263497 | orchestrator | Friday 13 March 2026 00:48:46 +0000 (0:00:07.035) 0:02:45.270 **********
2026-03-13 00:50:22.263503 | orchestrator | changed: [testbed-manager]
2026-03-13 00:50:22.263510 | orchestrator |
2026-03-13 00:50:22.263516 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-03-13 00:50:22.263523 | orchestrator | Friday 13 March 2026 00:49:01 +0000 (0:00:15.035) 0:03:00.305 **********
2026-03-13 00:50:22.263529 | orchestrator | ok: [testbed-manager]
2026-03-13 00:50:22.263536 | orchestrator |
2026-03-13 00:50:22.263542 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-03-13 00:50:22.263548 | orchestrator |
2026-03-13 00:50:22.263555 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-03-13 00:50:22.263568 | orchestrator | Friday 13 March 2026 00:49:02 +0000 (0:00:00.577) 0:03:00.883 **********
2026-03-13 00:50:22.263575 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:50:22.263581 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:50:22.263587 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:50:22.263594 | orchestrator |
2026-03-13 00:50:22.263600 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-03-13 00:50:22.263607 | orchestrator | Friday 13 March 2026 00:49:02 +0000 (0:00:00.347) 0:03:01.230 **********
2026-03-13 00:50:22.263613 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:50:22.263619 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:50:22.263626 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:50:22.263632 | orchestrator |
2026-03-13 00:50:22.263638 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-03-13 00:50:22.263645 | orchestrator | Friday 13 March 2026 00:49:02 +0000 (0:00:00.288) 0:03:01.518 **********
2026-03-13 00:50:22.263651 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 00:50:22.263657 | orchestrator |
2026-03-13 00:50:22.263663 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-03-13 00:50:22.263670 | orchestrator | Friday 13 March 2026 00:49:03 +0000 (0:00:00.590) 0:03:02.109 **********
2026-03-13 00:50:22.263676 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-13 00:50:22.263682 | orchestrator |
2026-03-13 00:50:22.263689 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-03-13 00:50:22.263695 | orchestrator | Friday 13 March 2026 00:49:04 +0000 (0:00:00.904) 0:03:03.013 **********
2026-03-13 00:50:22.263701 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-13 00:50:22.263708 | orchestrator |
2026-03-13 00:50:22.263715 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-03-13 00:50:22.263721 | orchestrator | Friday 13 March 2026 00:49:05 +0000 (0:00:00.991) 0:03:04.005 **********
2026-03-13 00:50:22.263727 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:50:22.263734 | orchestrator |
2026-03-13 00:50:22.263741 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-03-13 00:50:22.263747 | orchestrator | Friday 13 March 2026 00:49:05 +0000 (0:00:00.129) 0:03:04.134 **********
2026-03-13 00:50:22.263754 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-13 00:50:22.263760 | orchestrator |
2026-03-13 00:50:22.263766 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-03-13 00:50:22.263773 | orchestrator | Friday 13 March 2026 00:49:06 +0000 (0:00:01.028) 0:03:05.162 **********
2026-03-13 00:50:22.263779 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:50:22.263785 | orchestrator |
2026-03-13 00:50:22.263792 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-03-13 00:50:22.263798 | orchestrator | Friday 13 March 2026 00:49:06 +0000 (0:00:00.118) 0:03:05.280 **********
2026-03-13 00:50:22.263805 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:50:22.263811 | orchestrator |
2026-03-13 00:50:22.263817 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-03-13 00:50:22.263823 | orchestrator | Friday 13 March 2026 00:49:06 +0000 (0:00:00.109) 0:03:05.390 **********
2026-03-13 00:50:22.263830 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:50:22.263836 | orchestrator |
2026-03-13 00:50:22.263842 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-03-13 00:50:22.263849 | orchestrator | Friday 13 March 2026 00:49:06 +0000 (0:00:00.139) 0:03:05.530 **********
2026-03-13 00:50:22.263855 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:50:22.263862 | orchestrator |
2026-03-13 00:50:22.263868 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-03-13 00:50:22.263875 | orchestrator | Friday 13 March 2026 00:49:07 +0000 (0:00:00.110) 0:03:05.640 **********
2026-03-13 00:50:22.263882 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-13 00:50:22.263893 | orchestrator |
2026-03-13 00:50:22.263900 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-03-13 00:50:22.263906 | orchestrator | Friday 13 March 2026 00:49:13 +0000 (0:00:05.992) 0:03:11.633 **********
2026-03-13 00:50:22.263913 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-03-13 00:50:22.263919 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2026-03-13 00:50:22.263926 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-03-13 00:50:22.263932 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-03-13 00:50:22.263938 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-03-13 00:50:22.263945 | orchestrator |
2026-03-13 00:50:22.263951 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-03-13 00:50:22.263957 | orchestrator | Friday 13 March 2026 00:49:56 +0000 (0:00:42.956) 0:03:54.591 **********
2026-03-13 00:50:22.263968 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-13 00:50:22.263975 | orchestrator |
2026-03-13 00:50:22.263981 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-03-13 00:50:22.263991 | orchestrator | Friday 13 March 2026 00:49:57 +0000 (0:00:01.611) 0:03:56.203 **********
2026-03-13 00:50:22.263998 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-13 00:50:22.264004 | orchestrator |
2026-03-13 00:50:22.264010 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-03-13 00:50:22.264017 | orchestrator | Friday 13 March 2026 00:49:59 +0000 (0:00:01.409) 0:03:57.613 **********
2026-03-13 00:50:22.264023 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-13 00:50:22.264029 | orchestrator |
2026-03-13 00:50:22.264035 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-03-13 00:50:22.264041 | orchestrator | Friday 13 March 2026 00:50:00 +0000 (0:00:01.096) 0:03:58.709 **********
2026-03-13 00:50:22.264048 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:50:22.264054 | orchestrator |
2026-03-13 00:50:22.264060 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-03-13 00:50:22.264067 | orchestrator | Friday 13 March 2026 00:50:00 +0000 (0:00:00.127) 0:03:58.836 **********
2026-03-13 00:50:22.264073 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-03-13 00:50:22.264079 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-03-13 00:50:22.264086 | orchestrator |
2026-03-13 00:50:22.264092 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-03-13 00:50:22.264099 | orchestrator | Friday 13 March 2026 00:50:01 +0000 (0:00:01.432) 0:04:00.269 **********
2026-03-13 00:50:22.264105 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:50:22.264111 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:50:22.264118 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:50:22.264124 | orchestrator |
2026-03-13 00:50:22.264130 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-03-13 00:50:22.264137 | orchestrator | Friday 13 March 2026 00:50:01 +0000 (0:00:00.278) 0:04:00.548 **********
2026-03-13 00:50:22.264143 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:50:22.264150 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:50:22.264156 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:50:22.264163 | orchestrator |
2026-03-13 00:50:22.264169 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-03-13 00:50:22.264176 | orchestrator |
2026-03-13 00:50:22.264182 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-03-13 00:50:22.264188 | orchestrator | Friday 13 March 2026 00:50:02 +0000 (0:00:00.937) 0:04:01.485 **********
2026-03-13 00:50:22.264195 | orchestrator | ok: [testbed-manager]
2026-03-13 00:50:22.264201 | orchestrator |
2026-03-13 00:50:22.264208 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-03-13 00:50:22.264232 | orchestrator | Friday 13 March 2026 00:50:03 +0000 (0:00:00.117) 0:04:01.603 **********
2026-03-13 00:50:22.264238 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-03-13 00:50:22.264245 | orchestrator |
2026-03-13 00:50:22.264252 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-03-13 00:50:22.264258 | orchestrator | Friday 13 March 2026 00:50:03 +0000 (0:00:00.209) 0:04:01.813 **********
2026-03-13 00:50:22.264264 | orchestrator | changed: [testbed-manager]
2026-03-13 00:50:22.264270 | orchestrator |
2026-03-13 00:50:22.264277 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-03-13 00:50:22.264283 | orchestrator |
2026-03-13 00:50:22.264289 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-03-13 00:50:22.264295 | orchestrator | Friday 13 March 2026 00:50:08 +0000 (0:00:05.545) 0:04:07.359 **********
2026-03-13 00:50:22.264302 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:50:22.264308 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:50:22.264315 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:50:22.264321 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:50:22.264327 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:50:22.264333 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:50:22.264340 | orchestrator |
2026-03-13 00:50:22.264346 | orchestrator | TASK [Manage labels] ***********************************************************
2026-03-13 00:50:22.264353 | orchestrator | Friday 13 March 2026 00:50:09 +0000 (0:00:00.774) 0:04:08.133 **********
2026-03-13 00:50:22.264359 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-13 00:50:22.264365 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-13 00:50:22.264372 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-13 00:50:22.264378 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-13 00:50:22.264385 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-13 00:50:22.264391 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-13 00:50:22.264397 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-13 00:50:22.264404 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-13 00:50:22.264410 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-13 00:50:22.264417 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-13 00:50:22.264423 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-13 00:50:22.264429 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-13 00:50:22.264440 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-13 00:50:22.264447 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-13 00:50:22.264456 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-13 00:50:22.264462 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-13 00:50:22.264469 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-13 00:50:22.264475 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-13 00:50:22.264482 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-13 00:50:22.264488 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-13 00:50:22.264494 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-13 00:50:22.264505 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-13 00:50:22.264511 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-13 00:50:22.264518 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-13 00:50:22.264524 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-13 00:50:22.264530 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-13 00:50:22.264537 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-13 00:50:22.264544 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-13 00:50:22.264550 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-13 00:50:22.264556 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-13 00:50:22.264563 | orchestrator |
2026-03-13 00:50:22.264569 | orchestrator | TASK [Manage annotations] ******************************************************
2026-03-13 00:50:22.264575 | orchestrator | Friday 13 March 2026 00:50:20 +0000 (0:00:10.866) 0:04:18.999 **********
2026-03-13 00:50:22.264582 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:50:22.264588 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:50:22.264594 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:50:22.264601 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:50:22.264607 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:50:22.264614 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:50:22.264620 | orchestrator |
2026-03-13 00:50:22.264626 | orchestrator | TASK [Manage taints] ***********************************************************
2026-03-13 00:50:22.264633 | orchestrator | Friday 13 March 2026 00:50:20 +0000 (0:00:00.527) 0:04:19.527 **********
2026-03-13 00:50:22.264639 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:50:22.264646 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:50:22.264652 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:50:22.264658 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:50:22.264665 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:50:22.264671 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:50:22.264677 | orchestrator |
2026-03-13 00:50:22.264684 | orchestrator | PLAY RECAP *********************************************************************
2026-03-13 00:50:22.264690 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 00:50:22.264698 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-03-13 00:50:22.264704 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-13 00:50:22.264711 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-13 00:50:22.264717 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-13 00:50:22.264723 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-13 00:50:22.264729 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-13 00:50:22.264735 | orchestrator |
2026-03-13 00:50:22.264742 | orchestrator |
2026-03-13 00:50:22.264748 | orchestrator | TASKS RECAP ********************************************************************
2026-03-13 00:50:22.264755 | orchestrator | Friday 13 March 2026 00:50:21 +0000 (0:00:00.683) 0:04:20.210 **********
2026-03-13 00:50:22.264765 | orchestrator | ===============================================================================
2026-03-13 00:50:22.264772 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.67s
2026-03-13 00:50:22.264778 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.96s
2026-03-13 00:50:22.264784 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 22.70s
2026-03-13 00:50:22.264794 | orchestrator | kubectl : Install required packages ------------------------------------ 15.04s
2026-03-13 00:50:22.264800 | orchestrator | Manage labels ---------------------------------------------------------- 10.87s
2026-03-13 00:50:22.264812 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.39s
2026-03-13 00:50:22.264819 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.04s
2026-03-13 00:50:22.264825 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.99s
2026-03-13 00:50:22.264832 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.55s
2026-03-13 00:50:22.264838 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.48s
2026-03-13 00:50:22.264844 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 4.60s
2026-03-13 00:50:22.264850 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.43s
2026-03-13 00:50:22.264857 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.91s
2026-03-13 00:50:22.264863 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.81s
2026-03-13 00:50:22.264870 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.53s
2026-03-13 00:50:22.264876 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.11s
2026-03-13 00:50:22.264883 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.89s
2026-03-13 00:50:22.264890 | orchestrator | k3s_server : Clean previous runs of k3s-init ---------------------------- 1.80s
2026-03-13 00:50:22.264896 | orchestrator | k3s_server_post : Set _cilium_bgp_neighbors fact ------------------------ 1.61s
2026-03-13 00:50:22.264903 | orchestrator | k3s_server : Stop k3s-init ---------------------------------------------- 1.61s
2026-03-13 00:50:22.264909 | orchestrator | 2026-03-13 00:50:22 | INFO  | Wait 1 second(s) until the next check
2026-03-13 00:50:25.317001 | orchestrator | 2026-03-13 00:50:25 | INFO  | Task f6abaf64-8ae6-4187-9b2a-8de1335f9d32 is in state STARTED
2026-03-13 00:50:25.317899 | orchestrator | 2026-03-13 00:50:25 | INFO  | Task bf20fdea-193b-4caa-ab99-bd80708e3f64 is in state STARTED
2026-03-13 00:50:25.318695 | orchestrator | 2026-03-13 00:50:25 | INFO  | Task 96234c1e-2402-4cad-9f09-02a58188af75 is in state STARTED
2026-03-13 00:50:25.319660 | orchestrator | 2026-03-13 00:50:25 | INFO  | Task 6751781d-2a58-47b9-a12d-3437349865ea is in state STARTED
2026-03-13 00:50:25.321071 | orchestrator | 2026-03-13 00:50:25 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED
2026-03-13 00:50:25.321743 | orchestrator | 2026-03-13 00:50:25 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED
2026-03-13 00:50:25.321794 | orchestrator | 2026-03-13 00:50:25 | INFO  | Wait 1 second(s) until the next check
2026-03-13 00:50:28.356445 | orchestrator | 2026-03-13 00:50:28 | INFO  | Task f6abaf64-8ae6-4187-9b2a-8de1335f9d32 is in state STARTED
2026-03-13 00:50:28.357066 | orchestrator | 2026-03-13 00:50:28 | INFO  | Task bf20fdea-193b-4caa-ab99-bd80708e3f64 is in state STARTED
2026-03-13 00:50:28.357676 | orchestrator | 2026-03-13 00:50:28 | INFO  | Task 96234c1e-2402-4cad-9f09-02a58188af75 is in state SUCCESS
2026-03-13 00:50:28.358320 | orchestrator | 2026-03-13 00:50:28 | INFO  | Task 6751781d-2a58-47b9-a12d-3437349865ea is in state STARTED
2026-03-13 00:50:28.359307 | orchestrator | 2026-03-13 00:50:28 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED
2026-03-13 00:50:28.359791 | orchestrator | 2026-03-13 00:50:28 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED
2026-03-13 00:50:28.359820 | orchestrator | 2026-03-13 00:50:28 | INFO  | Wait 1 second(s) until the next check
2026-03-13 00:50:31.391413 | orchestrator | 2026-03-13 00:50:31 | INFO  | Task f6abaf64-8ae6-4187-9b2a-8de1335f9d32 is in state STARTED
2026-03-13 00:50:31.391757 | orchestrator | 2026-03-13 00:50:31 | INFO  | Task bf20fdea-193b-4caa-ab99-bd80708e3f64 is in state STARTED
2026-03-13 00:50:31.393909 | orchestrator | 2026-03-13 00:50:31 | INFO  | Task 6751781d-2a58-47b9-a12d-3437349865ea is in state STARTED
2026-03-13 00:50:31.394448 | orchestrator | 2026-03-13 00:50:31 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED
2026-03-13 00:50:31.395224 | orchestrator | 2026-03-13 00:50:31 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED
2026-03-13 00:50:31.395277 | orchestrator | 2026-03-13 00:50:31 | INFO  | Wait 1 second(s) until the next check
2026-03-13 00:50:34.437610 | orchestrator | 2026-03-13 00:50:34 | INFO  | Task f6abaf64-8ae6-4187-9b2a-8de1335f9d32 is in state STARTED
2026-03-13 00:50:34.439462 | orchestrator | 2026-03-13 00:50:34 | INFO  | Task bf20fdea-193b-4caa-ab99-bd80708e3f64 is in state STARTED
2026-03-13 00:50:34.440405 | orchestrator | 2026-03-13 00:50:34 | INFO  | Task 6751781d-2a58-47b9-a12d-3437349865ea is in state SUCCESS
2026-03-13 00:50:34.442540 | orchestrator | 2026-03-13 00:50:34 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED
2026-03-13 00:50:34.444350 | orchestrator | 2026-03-13 00:50:34 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED
2026-03-13 00:50:34.444403 | orchestrator | 2026-03-13 00:50:34 | INFO  | Wait 1 second(s) until the next check
2026-03-13 00:50:37.495477 | orchestrator | 2026-03-13 00:50:37 | INFO  | Task f6abaf64-8ae6-4187-9b2a-8de1335f9d32 is in state STARTED
2026-03-13 00:50:37.496981 | orchestrator | 2026-03-13 00:50:37 | INFO  | Task bf20fdea-193b-4caa-ab99-bd80708e3f64 is in state STARTED
2026-03-13 00:50:37.497453 | orchestrator | 2026-03-13 00:50:37 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED
2026-03-13 00:50:37.498099 | orchestrator | 2026-03-13 00:50:37 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED
2026-03-13 00:50:37.498289 | orchestrator | 2026-03-13 00:50:37 | INFO  | Wait 1 second(s) until the next check
2026-03-13 00:50:40.534538 | orchestrator | 2026-03-13 00:50:40 | INFO  | Task f6abaf64-8ae6-4187-9b2a-8de1335f9d32 is in state STARTED
2026-03-13 00:50:40.535956 | orchestrator | 2026-03-13 00:50:40 | INFO  | Task bf20fdea-193b-4caa-ab99-bd80708e3f64 is in state SUCCESS
2026-03-13 00:50:40.537033 | orchestrator |
2026-03-13 00:50:40.537055 | orchestrator |
2026-03-13 00:50:40.537062 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-03-13 00:50:40.537068 | orchestrator |
2026-03-13 00:50:40.537074 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-13 00:50:40.537079 | orchestrator | Friday 13 March 2026 00:50:25 +0000 (0:00:00.150) 0:00:00.150 **********
2026-03-13 00:50:40.537086 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-13 00:50:40.537092 | orchestrator |
2026-03-13 00:50:40.537097 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-13 00:50:40.537103 | orchestrator | Friday 13 March 2026 00:50:26 +0000 (0:00:00.716) 0:00:00.867 **********
2026-03-13 00:50:40.537109 | orchestrator | changed: [testbed-manager]
2026-03-13 00:50:40.537115 | orchestrator |
2026-03-13 00:50:40.537136 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-03-13 00:50:40.537141 | orchestrator | Friday 13 March 2026 00:50:27 +0000 (0:00:01.047) 0:00:01.914 **********
2026-03-13 00:50:40.537147 | orchestrator | changed: [testbed-manager]
2026-03-13 00:50:40.537153 | orchestrator |
2026-03-13 00:50:40.537158 | orchestrator | PLAY RECAP *********************************************************************
2026-03-13 00:50:40.537164 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 00:50:40.537170 | orchestrator |
2026-03-13 00:50:40.537176 | orchestrator |
2026-03-13 00:50:40.537181 | orchestrator | TASKS RECAP ********************************************************************
2026-03-13 00:50:40.537187 | orchestrator | Friday 13 March 2026 00:50:27 +0000 (0:00:00.410) 0:00:02.324 **********
2026-03-13 00:50:40.537219 | orchestrator | ===============================================================================
2026-03-13 00:50:40.537225 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.05s
2026-03-13 00:50:40.537230 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.72s
2026-03-13 00:50:40.537236 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.41s
2026-03-13 00:50:40.537241 | orchestrator |
2026-03-13 00:50:40.537247 | orchestrator |
2026-03-13 00:50:40.537252 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-03-13 00:50:40.537257 | orchestrator |
2026-03-13 00:50:40.537263 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-13 00:50:40.537268 | orchestrator | Friday 13 March 2026 00:50:25 +0000 (0:00:00.195) 0:00:00.195 **********
2026-03-13 00:50:40.537273 | orchestrator | ok: [testbed-manager]
2026-03-13 00:50:40.537279 | orchestrator |
2026-03-13 00:50:40.537284 | orchestrator | TASK [Create .kube directory] **************************************************
2026-03-13 00:50:40.537290 | orchestrator | Friday 13 March 2026 00:50:26 +0000 (0:00:00.525) 0:00:00.721 **********
2026-03-13 00:50:40.537295 | orchestrator | ok: [testbed-manager]
2026-03-13 00:50:40.537301 | orchestrator |
2026-03-13 00:50:40.537307 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-13 00:50:40.537313 | orchestrator | Friday 13 March 2026 00:50:26 +0000 (0:00:00.570) 0:00:01.291 **********
2026-03-13 00:50:40.537318 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-13 00:50:40.537324 | orchestrator |
2026-03-13 00:50:40.537329 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-13 00:50:40.537335 | orchestrator | Friday 13 March 2026 00:50:27 +0000 (0:00:00.804) 0:00:02.096 **********
2026-03-13 00:50:40.537341 | orchestrator | changed: [testbed-manager]
2026-03-13 00:50:40.537346 | orchestrator |
2026-03-13 00:50:40.537352 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-13 00:50:40.537358 | orchestrator | Friday 13 March 2026 00:50:28 +0000 (0:00:01.311) 0:00:03.407 **********
2026-03-13 00:50:40.537363 | orchestrator | changed: [testbed-manager]
2026-03-13 00:50:40.537369 | orchestrator |
2026-03-13 00:50:40.537374 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-13 00:50:40.537380 | orchestrator | Friday 13 March 2026 00:50:29 +0000 (0:00:00.477) 0:00:03.885 **********
2026-03-13 00:50:40.537386 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-13 00:50:40.537391 | orchestrator |
2026-03-13 00:50:40.537397 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-13 00:50:40.537403 | orchestrator | Friday 13 March 2026 00:50:30 +0000 (0:00:01.277) 0:00:05.162 **********
2026-03-13 00:50:40.537408 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-13 00:50:40.537414 | orchestrator |
2026-03-13 00:50:40.537419 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-03-13 00:50:40.537433 | orchestrator | Friday 13 March 2026 00:50:31 +0000 (0:00:00.757) 0:00:05.920 **********
2026-03-13 00:50:40.537438 | orchestrator | ok: [testbed-manager]
2026-03-13 00:50:40.537448 | orchestrator |
2026-03-13 00:50:40.537454 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-03-13 00:50:40.537459 | orchestrator | Friday 13 March 2026 00:50:31 +0000 (0:00:00.353) 0:00:06.273 **********
2026-03-13 00:50:40.537464 | orchestrator | ok: [testbed-manager]
2026-03-13 00:50:40.537470 | orchestrator |
2026-03-13 00:50:40.537475 | orchestrator | PLAY RECAP *********************************************************************
2026-03-13 00:50:40.537480 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 00:50:40.537486 | orchestrator |
2026-03-13 00:50:40.537492 | orchestrator |
2026-03-13 00:50:40.537497 | orchestrator | TASKS RECAP ********************************************************************
2026-03-13 00:50:40.537502 | orchestrator | Friday 13 March 2026 00:50:32 +0000 (0:00:00.267) 0:00:06.540 **********
2026-03-13 00:50:40.537507 | orchestrator | ===============================================================================
2026-03-13 00:50:40.537512 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.31s
2026-03-13 00:50:40.537516 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.28s
2026-03-13 00:50:40.537521 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.80s
2026-03-13 00:50:40.537533 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.76s
2026-03-13 00:50:40.537539 | orchestrator | Create .kube directory -------------------------------------------------- 0.57s
2026-03-13 00:50:40.537545 | orchestrator | Get home directory of operator user ------------------------------------- 0.53s
2026-03-13 00:50:40.537551 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.48s
2026-03-13 00:50:40.537556 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.35s
2026-03-13 00:50:40.537562 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.27s
2026-03-13 00:50:40.537567 | orchestrator |
2026-03-13 00:50:40.537573 | orchestrator |
2026-03-13 00:50:40.537578 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2026-03-13 00:50:40.537584 | orchestrator |
2026-03-13 00:50:40.537589 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-03-13 00:50:40.537595 | orchestrator | Friday 13 March 2026 00:48:26 +0000 (0:00:00.079) 0:00:00.079 **********
2026-03-13 00:50:40.537600 | orchestrator | ok: [localhost] => {
2026-03-13 00:50:40.537606 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2026-03-13 00:50:40.537611 | orchestrator | }
2026-03-13 00:50:40.537617 | orchestrator |
2026-03-13 00:50:40.537623 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2026-03-13 00:50:40.537628 | orchestrator | Friday 13 March 2026 00:48:26 +0000 (0:00:00.052) 0:00:00.131 **********
2026-03-13 00:50:40.537634 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2026-03-13 00:50:40.537640 | orchestrator | ...ignoring
2026-03-13 00:50:40.537646 | orchestrator |
2026-03-13 00:50:40.537651 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2026-03-13 00:50:40.537659 | orchestrator | Friday 13 March 2026 00:48:29 +0000 (0:00:03.478) 0:00:03.610 **********
2026-03-13 00:50:40.537672 | orchestrator | skipping: [localhost]
2026-03-13 00:50:40.537685 | orchestrator |
2026-03-13 00:50:40.537698 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2026-03-13 00:50:40.537710 | orchestrator | Friday 13 March 2026 00:48:30 +0000 (0:00:00.205) 0:00:03.815 **********
2026-03-13 00:50:40.537723 | orchestrator | ok: [localhost]
2026-03-13 00:50:40.537736 | orchestrator |
2026-03-13 00:50:40.537748 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-13 00:50:40.537761 | orchestrator |
2026-03-13 00:50:40.537772 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-13 00:50:40.537785 | orchestrator | Friday 13 March 2026 00:48:30 +0000 (0:00:00.287) 0:00:04.103 **********
2026-03-13 00:50:40.537804 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:50:40.537816 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:50:40.537829 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:50:40.537842 | orchestrator |
2026-03-13 00:50:40.537856 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-13 00:50:40.537867 | orchestrator | Friday 13 March 2026 00:48:30 +0000 (0:00:00.403) 0:00:04.507 **********
2026-03-13 00:50:40.537878 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-03-13 00:50:40.537889 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-03-13 00:50:40.537903 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-03-13 00:50:40.538007 | orchestrator |
2026-03-13 00:50:40.538072 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-03-13 00:50:40.538078 | orchestrator |
2026-03-13 00:50:40.538084 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-13 00:50:40.538090 | orchestrator | Friday 13 March 2026 00:48:31 +0000 (0:00:00.691) 0:00:05.199 **********
2026-03-13 00:50:40.538096 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 00:50:40.538102 | orchestrator |
2026-03-13 00:50:40.538108 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-03-13 00:50:40.538114 | orchestrator | Friday 13 March 2026 00:48:32 +0000 (0:00:00.748) 0:00:05.947 **********
2026-03-13 00:50:40.538119 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:50:40.538125 | orchestrator |
2026-03-13 00:50:40.538130 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-03-13 00:50:40.538136 | orchestrator | Friday 13 March 2026 00:48:33 +0000 (0:00:01.173) 0:00:07.121 **********
2026-03-13 00:50:40.538142 | orchestrator | skipping:
[testbed-node-0] 2026-03-13 00:50:40.538147 | orchestrator | 2026-03-13 00:50:40.538157 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-03-13 00:50:40.538163 | orchestrator | Friday 13 March 2026 00:48:34 +0000 (0:00:00.884) 0:00:08.006 ********** 2026-03-13 00:50:40.538169 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:50:40.538174 | orchestrator | 2026-03-13 00:50:40.538180 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-03-13 00:50:40.538186 | orchestrator | Friday 13 March 2026 00:48:35 +0000 (0:00:01.145) 0:00:09.151 ********** 2026-03-13 00:50:40.538202 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:50:40.538208 | orchestrator | 2026-03-13 00:50:40.538214 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-13 00:50:40.538219 | orchestrator | Friday 13 March 2026 00:48:36 +0000 (0:00:00.577) 0:00:09.728 ********** 2026-03-13 00:50:40.538225 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:50:40.538230 | orchestrator | 2026-03-13 00:50:40.538236 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-13 00:50:40.538241 | orchestrator | Friday 13 March 2026 00:48:37 +0000 (0:00:01.777) 0:00:11.506 ********** 2026-03-13 00:50:40.538247 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:50:40.538252 | orchestrator | 2026-03-13 00:50:40.538258 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-13 00:50:40.538268 | orchestrator | Friday 13 March 2026 00:48:38 +0000 (0:00:00.661) 0:00:12.167 ********** 2026-03-13 00:50:40.538274 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:50:40.538279 | orchestrator | 2026-03-13 00:50:40.538285 | orchestrator | TASK [rabbitmq : List 
RabbitMQ policies] *************************************** 2026-03-13 00:50:40.538290 | orchestrator | Friday 13 March 2026 00:48:39 +0000 (0:00:00.905) 0:00:13.073 ********** 2026-03-13 00:50:40.538296 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:50:40.538301 | orchestrator | 2026-03-13 00:50:40.538307 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-13 00:50:40.538312 | orchestrator | Friday 13 March 2026 00:48:39 +0000 (0:00:00.501) 0:00:13.574 ********** 2026-03-13 00:50:40.538322 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:50:40.538327 | orchestrator | 2026-03-13 00:50:40.538332 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-13 00:50:40.538338 | orchestrator | Friday 13 March 2026 00:48:40 +0000 (0:00:00.529) 0:00:14.103 ********** 2026-03-13 00:50:40.538346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-13 
00:50:40.538355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-13 00:50:40.538364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-13 00:50:40.538370 | orchestrator | 2026-03-13 00:50:40.538375 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-13 00:50:40.538381 | orchestrator | Friday 13 March 2026 00:48:41 +0000 (0:00:00.830) 0:00:14.934 ********** 2026-03-13 00:50:40.538390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-13 00:50:40.538400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-13 00:50:40.538407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-13 00:50:40.538412 | orchestrator | 2026-03-13 00:50:40.538420 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-13 00:50:40.538426 | orchestrator | Friday 13 March 2026 00:48:42 +0000 (0:00:01.686) 
0:00:16.621 ********** 2026-03-13 00:50:40.538431 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-13 00:50:40.538437 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-13 00:50:40.538442 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-13 00:50:40.538448 | orchestrator | 2026-03-13 00:50:40.538454 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-03-13 00:50:40.538459 | orchestrator | Friday 13 March 2026 00:48:44 +0000 (0:00:01.642) 0:00:18.264 ********** 2026-03-13 00:50:40.538465 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-13 00:50:40.538470 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-13 00:50:40.538479 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-13 00:50:40.538485 | orchestrator | 2026-03-13 00:50:40.538491 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-03-13 00:50:40.538498 | orchestrator | Friday 13 March 2026 00:48:47 +0000 (0:00:03.283) 0:00:21.547 ********** 2026-03-13 00:50:40.538504 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-13 00:50:40.538508 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-13 00:50:40.538513 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-13 00:50:40.538518 | orchestrator | 2026-03-13 00:50:40.538524 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-03-13 00:50:40.538529 | orchestrator | Friday 13 
March 2026 00:48:49 +0000 (0:00:01.478) 0:00:23.025 ********** 2026-03-13 00:50:40.538535 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-13 00:50:40.538540 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-13 00:50:40.538546 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-13 00:50:40.538551 | orchestrator | 2026-03-13 00:50:40.538557 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-03-13 00:50:40.538562 | orchestrator | Friday 13 March 2026 00:48:52 +0000 (0:00:03.126) 0:00:26.151 ********** 2026-03-13 00:50:40.538568 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-13 00:50:40.538573 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-13 00:50:40.538579 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-13 00:50:40.538585 | orchestrator | 2026-03-13 00:50:40.538590 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-03-13 00:50:40.538595 | orchestrator | Friday 13 March 2026 00:48:54 +0000 (0:00:01.730) 0:00:27.882 ********** 2026-03-13 00:50:40.538601 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-13 00:50:40.538606 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-13 00:50:40.538612 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-13 00:50:40.538617 | orchestrator | 2026-03-13 00:50:40.538623 | orchestrator | TASK [rabbitmq : include_tasks] 
************************************************ 2026-03-13 00:50:40.538629 | orchestrator | Friday 13 March 2026 00:48:57 +0000 (0:00:03.054) 0:00:30.936 ********** 2026-03-13 00:50:40.538634 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:50:40.538640 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:50:40.538645 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:50:40.538651 | orchestrator | 2026-03-13 00:50:40.538656 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-03-13 00:50:40.538662 | orchestrator | Friday 13 March 2026 00:48:58 +0000 (0:00:01.195) 0:00:32.132 ********** 2026-03-13 00:50:40.538670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-13 00:50:40.538682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 
'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-13 00:50:40.538689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-13 00:50:40.538695 | orchestrator | 2026-03-13 
00:50:40.538701 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-03-13 00:50:40.538706 | orchestrator | Friday 13 March 2026 00:48:59 +0000 (0:00:01.272) 0:00:33.404 ********** 2026-03-13 00:50:40.538712 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:50:40.538717 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:50:40.538722 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:50:40.538728 | orchestrator | 2026-03-13 00:50:40.538734 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-03-13 00:50:40.538739 | orchestrator | Friday 13 March 2026 00:49:00 +0000 (0:00:00.996) 0:00:34.401 ********** 2026-03-13 00:50:40.538745 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:50:40.538750 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:50:40.538756 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:50:40.538761 | orchestrator | 2026-03-13 00:50:40.538766 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-03-13 00:50:40.538772 | orchestrator | Friday 13 March 2026 00:49:07 +0000 (0:00:06.505) 0:00:40.907 ********** 2026-03-13 00:50:40.538777 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:50:40.538782 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:50:40.538788 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:50:40.538793 | orchestrator | 2026-03-13 00:50:40.538799 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-13 00:50:40.538808 | orchestrator | 2026-03-13 00:50:40.538813 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-13 00:50:40.538819 | orchestrator | Friday 13 March 2026 00:49:07 +0000 (0:00:00.685) 0:00:41.592 ********** 2026-03-13 00:50:40.538825 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:50:40.538830 | orchestrator | 
2026-03-13 00:50:40.538836 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-13 00:50:40.538842 | orchestrator | Friday 13 March 2026 00:49:08 +0000 (0:00:00.694) 0:00:42.286 ********** 2026-03-13 00:50:40.538847 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:50:40.538853 | orchestrator | 2026-03-13 00:50:40.538859 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-13 00:50:40.538864 | orchestrator | Friday 13 March 2026 00:49:08 +0000 (0:00:00.395) 0:00:42.681 ********** 2026-03-13 00:50:40.538870 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:50:40.538875 | orchestrator | 2026-03-13 00:50:40.538881 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-13 00:50:40.538886 | orchestrator | Friday 13 March 2026 00:49:15 +0000 (0:00:06.934) 0:00:49.616 ********** 2026-03-13 00:50:40.538892 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:50:40.538898 | orchestrator | 2026-03-13 00:50:40.538903 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-13 00:50:40.538909 | orchestrator | 2026-03-13 00:50:40.538914 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-13 00:50:40.538926 | orchestrator | Friday 13 March 2026 00:50:03 +0000 (0:00:47.184) 0:01:36.801 ********** 2026-03-13 00:50:40.538932 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:50:40.538938 | orchestrator | 2026-03-13 00:50:40.538943 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-13 00:50:40.538949 | orchestrator | Friday 13 March 2026 00:50:03 +0000 (0:00:00.562) 0:01:37.363 ********** 2026-03-13 00:50:40.538955 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:50:40.538960 | orchestrator | 2026-03-13 00:50:40.538966 | orchestrator | TASK 
[rabbitmq : Restart rabbitmq container] *********************************** 2026-03-13 00:50:40.538971 | orchestrator | Friday 13 March 2026 00:50:03 +0000 (0:00:00.214) 0:01:37.578 ********** 2026-03-13 00:50:40.538977 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:50:40.538982 | orchestrator | 2026-03-13 00:50:40.538988 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-13 00:50:40.538993 | orchestrator | Friday 13 March 2026 00:50:05 +0000 (0:00:01.586) 0:01:39.164 ********** 2026-03-13 00:50:40.538999 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:50:40.539004 | orchestrator | 2026-03-13 00:50:40.539010 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-13 00:50:40.539015 | orchestrator | 2026-03-13 00:50:40.539021 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-13 00:50:40.539026 | orchestrator | Friday 13 March 2026 00:50:18 +0000 (0:00:12.957) 0:01:52.122 ********** 2026-03-13 00:50:40.539031 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:50:40.539037 | orchestrator | 2026-03-13 00:50:40.539045 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-13 00:50:40.539051 | orchestrator | Friday 13 March 2026 00:50:19 +0000 (0:00:00.715) 0:01:52.837 ********** 2026-03-13 00:50:40.539056 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:50:40.539062 | orchestrator | 2026-03-13 00:50:40.539068 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-13 00:50:40.539073 | orchestrator | Friday 13 March 2026 00:50:19 +0000 (0:00:00.423) 0:01:53.260 ********** 2026-03-13 00:50:40.539079 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:50:40.539084 | orchestrator | 2026-03-13 00:50:40.539090 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] 
******************************** 2026-03-13 00:50:40.539094 | orchestrator | Friday 13 March 2026 00:50:26 +0000 (0:00:06.996) 0:02:00.257 ********** 2026-03-13 00:50:40.539099 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:50:40.539104 | orchestrator | 2026-03-13 00:50:40.539113 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-03-13 00:50:40.539118 | orchestrator | 2026-03-13 00:50:40.539123 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-03-13 00:50:40.539128 | orchestrator | Friday 13 March 2026 00:50:36 +0000 (0:00:10.148) 0:02:10.406 ********** 2026-03-13 00:50:40.539133 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:50:40.539139 | orchestrator | 2026-03-13 00:50:40.539144 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-13 00:50:40.539150 | orchestrator | Friday 13 March 2026 00:50:37 +0000 (0:00:00.452) 0:02:10.859 ********** 2026-03-13 00:50:40.539175 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:50:40.539182 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:50:40.539187 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:50:40.539230 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-13 00:50:40.539236 | orchestrator | enable_outward_rabbitmq_True 2026-03-13 00:50:40.539242 | orchestrator | 2026-03-13 00:50:40.539247 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-03-13 00:50:40.539253 | orchestrator | skipping: no hosts matched 2026-03-13 00:50:40.539258 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-13 00:50:40.539264 | orchestrator | outward_rabbitmq_restart 2026-03-13 00:50:40.539269 | orchestrator | 2026-03-13 00:50:40.539275 | orchestrator | PLAY [Restart rabbitmq (outward) services] 
************************************* 2026-03-13 00:50:40.539280 | orchestrator | skipping: no hosts matched 2026-03-13 00:50:40.539286 | orchestrator | 2026-03-13 00:50:40.539291 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-03-13 00:50:40.539297 | orchestrator | skipping: no hosts matched 2026-03-13 00:50:40.539302 | orchestrator | 2026-03-13 00:50:40.539308 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 00:50:40.539314 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-13 00:50:40.539320 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-13 00:50:40.539326 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-13 00:50:40.539331 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-13 00:50:40.539337 | orchestrator | 2026-03-13 00:50:40.539342 | orchestrator | 2026-03-13 00:50:40.539348 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 00:50:40.539353 | orchestrator | Friday 13 March 2026 00:50:39 +0000 (0:00:02.789) 0:02:13.648 ********** 2026-03-13 00:50:40.539359 | orchestrator | =============================================================================== 2026-03-13 00:50:40.539364 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 70.29s 2026-03-13 00:50:40.539370 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 15.52s 2026-03-13 00:50:40.539375 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.51s 2026-03-13 00:50:40.539381 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.48s 
2026-03-13 00:50:40.539386 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.28s 2026-03-13 00:50:40.539392 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 3.13s 2026-03-13 00:50:40.539398 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 3.05s 2026-03-13 00:50:40.539403 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.79s 2026-03-13 00:50:40.539409 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.97s 2026-03-13 00:50:40.539418 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 1.78s 2026-03-13 00:50:40.539424 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.73s 2026-03-13 00:50:40.539429 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.69s 2026-03-13 00:50:40.539435 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.64s 2026-03-13 00:50:40.539440 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.48s 2026-03-13 00:50:40.539446 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.27s 2026-03-13 00:50:40.539451 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.20s 2026-03-13 00:50:40.539457 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.17s 2026-03-13 00:50:40.539466 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 1.15s 2026-03-13 00:50:40.539471 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.03s 2026-03-13 00:50:40.539477 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.00s 2026-03-13 
00:50:40.539483 | orchestrator | 2026-03-13 00:50:40 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:50:40.539753 | orchestrator | 2026-03-13 00:50:40 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:50:40.539764 | orchestrator | 2026-03-13 00:50:40 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:50:43.568381 | orchestrator | 2026-03-13 00:50:43 | INFO  | Task f6abaf64-8ae6-4187-9b2a-8de1335f9d32 is in state STARTED 2026-03-13 00:50:43.572096 | orchestrator | 2026-03-13 00:50:43 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:50:43.573838 | orchestrator | 2026-03-13 00:50:43 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:50:43.573887 | orchestrator | 2026-03-13 00:50:43 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:50:46.618224 | orchestrator | 2026-03-13 00:50:46 | INFO  | Task f6abaf64-8ae6-4187-9b2a-8de1335f9d32 is in state STARTED 2026-03-13 00:50:46.618283 | orchestrator | 2026-03-13 00:50:46 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:50:46.619778 | orchestrator | 2026-03-13 00:50:46 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:50:46.619965 | orchestrator | 2026-03-13 00:50:46 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:50:49.654745 | orchestrator | 2026-03-13 00:50:49 | INFO  | Task f6abaf64-8ae6-4187-9b2a-8de1335f9d32 is in state STARTED 2026-03-13 00:50:49.655724 | orchestrator | 2026-03-13 00:50:49 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:50:49.656782 | orchestrator | 2026-03-13 00:50:49 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:50:49.656880 | orchestrator | 2026-03-13 00:50:49 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:50:52.700670 | orchestrator | 2026-03-13 00:50:52 | 
INFO  | Task f6abaf64-8ae6-4187-9b2a-8de1335f9d32 is in state STARTED 2026-03-13 00:50:52.704974 | orchestrator | 2026-03-13 00:50:52 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:50:52.706528 | orchestrator | 2026-03-13 00:50:52 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:50:52.706627 | orchestrator | 2026-03-13 00:50:52 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:50:55.755132 | orchestrator | 2026-03-13 00:50:55 | INFO  | Task f6abaf64-8ae6-4187-9b2a-8de1335f9d32 is in state STARTED 2026-03-13 00:50:55.757194 | orchestrator | 2026-03-13 00:50:55 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:50:55.758192 | orchestrator | 2026-03-13 00:50:55 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:50:55.758892 | orchestrator | 2026-03-13 00:50:55 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:50:58.809246 | orchestrator | 2026-03-13 00:50:58 | INFO  | Task f6abaf64-8ae6-4187-9b2a-8de1335f9d32 is in state STARTED 2026-03-13 00:50:58.809659 | orchestrator | 2026-03-13 00:50:58 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:50:58.813541 | orchestrator | 2026-03-13 00:50:58 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:50:58.813585 | orchestrator | 2026-03-13 00:50:58 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:51:01.854593 | orchestrator | 2026-03-13 00:51:01 | INFO  | Task f6abaf64-8ae6-4187-9b2a-8de1335f9d32 is in state STARTED 2026-03-13 00:51:01.857400 | orchestrator | 2026-03-13 00:51:01 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:51:01.860081 | orchestrator | 2026-03-13 00:51:01 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:51:01.860291 | orchestrator | 2026-03-13 00:51:01 | INFO  | Wait 1 second(s) until 
the next check 2026-03-13 00:51:04.903826 | orchestrator | 2026-03-13 00:51:04 | INFO  | Task f6abaf64-8ae6-4187-9b2a-8de1335f9d32 is in state STARTED 2026-03-13 00:51:04.905556 | orchestrator | 2026-03-13 00:51:04 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:51:04.907876 | orchestrator | 2026-03-13 00:51:04 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:51:04.908387 | orchestrator | 2026-03-13 00:51:04 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:51:07.936646 | orchestrator | 2026-03-13 00:51:07 | INFO  | Task f6abaf64-8ae6-4187-9b2a-8de1335f9d32 is in state STARTED 2026-03-13 00:51:07.937335 | orchestrator | 2026-03-13 00:51:07 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:51:07.938138 | orchestrator | 2026-03-13 00:51:07 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:51:07.938282 | orchestrator | 2026-03-13 00:51:07 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:51:10.970590 | orchestrator | 2026-03-13 00:51:10 | INFO  | Task f6abaf64-8ae6-4187-9b2a-8de1335f9d32 is in state STARTED 2026-03-13 00:51:10.972946 | orchestrator | 2026-03-13 00:51:10 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:51:10.975823 | orchestrator | 2026-03-13 00:51:10 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:51:10.975876 | orchestrator | 2026-03-13 00:51:10 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:51:14.025472 | orchestrator | 2026-03-13 00:51:14 | INFO  | Task f6abaf64-8ae6-4187-9b2a-8de1335f9d32 is in state STARTED 2026-03-13 00:51:14.027333 | orchestrator | 2026-03-13 00:51:14 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:51:14.030133 | orchestrator | 2026-03-13 00:51:14 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 
00:51:14.030246 | orchestrator | 2026-03-13 00:51:14 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:51:17.062795 | orchestrator | 2026-03-13 00:51:17 | INFO  | Task f6abaf64-8ae6-4187-9b2a-8de1335f9d32 is in state STARTED 2026-03-13 00:51:17.063306 | orchestrator | 2026-03-13 00:51:17 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:51:17.064309 | orchestrator | 2026-03-13 00:51:17 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:51:17.064971 | orchestrator | 2026-03-13 00:51:17 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:51:20.098847 | orchestrator | 2026-03-13 00:51:20 | INFO  | Task f6abaf64-8ae6-4187-9b2a-8de1335f9d32 is in state STARTED 2026-03-13 00:51:20.099391 | orchestrator | 2026-03-13 00:51:20 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:51:20.100223 | orchestrator | 2026-03-13 00:51:20 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:51:20.100267 | orchestrator | 2026-03-13 00:51:20 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:51:23.140793 | orchestrator | 2026-03-13 00:51:23 | INFO  | Task f6abaf64-8ae6-4187-9b2a-8de1335f9d32 is in state STARTED 2026-03-13 00:51:23.143044 | orchestrator | 2026-03-13 00:51:23 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:51:23.147559 | orchestrator | 2026-03-13 00:51:23 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:51:23.147613 | orchestrator | 2026-03-13 00:51:23 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:51:26.183796 | orchestrator | 2026-03-13 00:51:26 | INFO  | Task f6abaf64-8ae6-4187-9b2a-8de1335f9d32 is in state STARTED 2026-03-13 00:51:26.185178 | orchestrator | 2026-03-13 00:51:26 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:51:26.189528 | orchestrator | 2026-03-13 00:51:26 | 
INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED
2026-03-13 00:51:26.189592 | orchestrator | 2026-03-13 00:51:26 | INFO  | Wait 1 second(s) until the next check
2026-03-13 00:51:29.228458 | orchestrator |
2026-03-13 00:51:29.228549 | orchestrator | 2026-03-13 00:51:29 | INFO  | Task f6abaf64-8ae6-4187-9b2a-8de1335f9d32 is in state SUCCESS
2026-03-13 00:51:29.229705 | orchestrator |
2026-03-13 00:51:29.229748 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-13 00:51:29.229756 | orchestrator |
2026-03-13 00:51:29.229961 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-13 00:51:29.229974 | orchestrator | Friday 13 March 2026 00:49:11 +0000 (0:00:00.179) 0:00:00.179 **********
2026-03-13 00:51:29.229979 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:51:29.229984 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:51:29.229988 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:51:29.229992 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:51:29.229996 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:51:29.229999 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:51:29.230003 | orchestrator |
2026-03-13 00:51:29.230008 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-13 00:51:29.230060 | orchestrator | Friday 13 March 2026 00:49:12 +0000 (0:00:01.097) 0:00:01.277 **********
2026-03-13 00:51:29.230066 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-03-13 00:51:29.230071 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-03-13 00:51:29.230075 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-03-13 00:51:29.230079 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-03-13 00:51:29.230090 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-03-13 00:51:29.230095 | orchestrator | ok:
[testbed-node-2] => (item=enable_ovn_True) 2026-03-13 00:51:29.230098 | orchestrator | 2026-03-13 00:51:29.230103 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-13 00:51:29.230106 | orchestrator | 2026-03-13 00:51:29.230110 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-03-13 00:51:29.230114 | orchestrator | Friday 13 March 2026 00:49:14 +0000 (0:00:01.869) 0:00:03.146 ********** 2026-03-13 00:51:29.230181 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:51:29.230192 | orchestrator | 2026-03-13 00:51:29.230201 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-13 00:51:29.230209 | orchestrator | Friday 13 March 2026 00:49:16 +0000 (0:00:01.834) 0:00:04.981 ********** 2026-03-13 00:51:29.230217 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.230227 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.230251 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.230257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.230263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.230276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.230283 | orchestrator | 2026-03-13 00:51:29.230302 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-13 00:51:29.230308 | orchestrator | Friday 13 March 2026 00:49:17 
+0000 (0:00:01.688) 0:00:06.670 ********** 2026-03-13 00:51:29.230314 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.230320 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.230334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.230341 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.230347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': 
{'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.230354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.230358 | orchestrator | 2026-03-13 00:51:29.230362 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-13 00:51:29.230366 | orchestrator | Friday 13 March 2026 00:49:20 +0000 (0:00:02.029) 0:00:08.699 ********** 2026-03-13 00:51:29.230369 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.230373 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-13 00:51:29.230386 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.230390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.230397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.230445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.230449 | orchestrator | 2026-03-13 00:51:29.230453 | orchestrator | TASK [ovn-controller : Copying over systemd 
override] ************************** 2026-03-13 00:51:29.230457 | orchestrator | Friday 13 March 2026 00:49:21 +0000 (0:00:01.238) 0:00:09.938 ********** 2026-03-13 00:51:29.230461 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.230465 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.230469 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.230472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-13 00:51:29.230476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.230483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.230487 | orchestrator | 2026-03-13 00:51:29.230494 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-03-13 00:51:29.230503 | orchestrator | Friday 13 March 2026 00:49:22 +0000 (0:00:01.680) 0:00:11.618 ********** 2026-03-13 00:51:29.230507 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.230511 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.230515 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.230519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.230523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.230526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.230530 | 
orchestrator | 2026-03-13 00:51:29.230534 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-13 00:51:29.230538 | orchestrator | Friday 13 March 2026 00:49:24 +0000 (0:00:01.268) 0:00:12.887 ********** 2026-03-13 00:51:29.230542 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:51:29.230546 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:51:29.230550 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:51:29.230554 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:51:29.230557 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:51:29.230561 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:51:29.230565 | orchestrator | 2026-03-13 00:51:29.230569 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-03-13 00:51:29.230573 | orchestrator | Friday 13 March 2026 00:49:26 +0000 (0:00:02.310) 0:00:15.197 ********** 2026-03-13 00:51:29.230576 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-03-13 00:51:29.230581 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-03-13 00:51:29.230585 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-13 00:51:29.230592 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-13 00:51:29.230596 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-13 00:51:29.230602 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-13 00:51:29.230606 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-13 00:51:29.230610 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 
'geneve'}) 2026-03-13 00:51:29.230616 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-13 00:51:29.230620 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-13 00:51:29.230624 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-13 00:51:29.230628 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-13 00:51:29.230632 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-13 00:51:29.230637 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-13 00:51:29.230641 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-13 00:51:29.230645 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-13 00:51:29.230649 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-13 00:51:29.230653 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-13 00:51:29.230657 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-13 00:51:29.230662 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-13 00:51:29.230666 | orchestrator | changed: [testbed-node-5] => (item={'name': 
'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-13 00:51:29.230670 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-13 00:51:29.230673 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-13 00:51:29.230677 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-13 00:51:29.230681 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-13 00:51:29.230685 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-13 00:51:29.230689 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-13 00:51:29.230692 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-13 00:51:29.230696 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-13 00:51:29.230700 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-13 00:51:29.230704 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-13 00:51:29.230708 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-13 00:51:29.230717 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-13 00:51:29.230721 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-13 00:51:29.230725 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-13 00:51:29.230729 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 
'value': False}) 2026-03-13 00:51:29.230733 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-13 00:51:29.230737 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-13 00:51:29.230741 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-13 00:51:29.230744 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-13 00:51:29.230748 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-13 00:51:29.230752 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-13 00:51:29.230758 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-13 00:51:29.230762 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-03-13 00:51:29.230769 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-13 00:51:29.230773 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-13 00:51:29.230777 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-13 00:51:29.230780 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 
2026-03-13 00:51:29.230784 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-13 00:51:29.230788 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-13 00:51:29.230792 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-13 00:51:29.230795 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-13 00:51:29.230799 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-13 00:51:29.230803 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-13 00:51:29.230807 | orchestrator | 2026-03-13 00:51:29.230811 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-13 00:51:29.230815 | orchestrator | Friday 13 March 2026 00:49:45 +0000 (0:00:19.378) 0:00:34.576 ********** 2026-03-13 00:51:29.230818 | orchestrator | 2026-03-13 00:51:29.230822 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-13 00:51:29.230826 | orchestrator | Friday 13 March 2026 00:49:45 +0000 (0:00:00.083) 0:00:34.659 ********** 2026-03-13 00:51:29.230830 | orchestrator | 2026-03-13 00:51:29.230834 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-13 00:51:29.230837 | orchestrator | Friday 13 March 2026 00:49:46 +0000 (0:00:00.099) 0:00:34.758 ********** 2026-03-13 00:51:29.230841 | orchestrator | 2026-03-13 00:51:29.230847 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-13 
00:51:29.230851 | orchestrator | Friday 13 March 2026 00:49:46 +0000 (0:00:00.067) 0:00:34.826 ********** 2026-03-13 00:51:29.230855 | orchestrator | 2026-03-13 00:51:29.230859 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-13 00:51:29.230862 | orchestrator | Friday 13 March 2026 00:49:46 +0000 (0:00:00.066) 0:00:34.893 ********** 2026-03-13 00:51:29.230866 | orchestrator | 2026-03-13 00:51:29.230870 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-13 00:51:29.230874 | orchestrator | Friday 13 March 2026 00:49:46 +0000 (0:00:00.065) 0:00:34.959 ********** 2026-03-13 00:51:29.230877 | orchestrator | 2026-03-13 00:51:29.230881 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-03-13 00:51:29.230885 | orchestrator | Friday 13 March 2026 00:49:46 +0000 (0:00:00.066) 0:00:35.025 ********** 2026-03-13 00:51:29.230889 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:51:29.230892 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:51:29.230896 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:51:29.230900 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:51:29.230904 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:51:29.230907 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:51:29.230911 | orchestrator | 2026-03-13 00:51:29.230915 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-13 00:51:29.230921 | orchestrator | Friday 13 March 2026 00:49:47 +0000 (0:00:01.637) 0:00:36.663 ********** 2026-03-13 00:51:29.230927 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:51:29.230933 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:51:29.230940 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:51:29.230946 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:51:29.230951 | orchestrator | changed: [testbed-node-1] 
2026-03-13 00:51:29.230961 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:51:29.230968 | orchestrator | 2026-03-13 00:51:29.230974 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-13 00:51:29.230979 | orchestrator | 2026-03-13 00:51:29.230985 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-13 00:51:29.230992 | orchestrator | Friday 13 March 2026 00:50:12 +0000 (0:00:24.041) 0:01:00.705 ********** 2026-03-13 00:51:29.230998 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:51:29.231004 | orchestrator | 2026-03-13 00:51:29.231009 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-13 00:51:29.231015 | orchestrator | Friday 13 March 2026 00:50:13 +0000 (0:00:01.395) 0:01:02.100 ********** 2026-03-13 00:51:29.231021 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:51:29.231027 | orchestrator | 2026-03-13 00:51:29.231033 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-13 00:51:29.231039 | orchestrator | Friday 13 March 2026 00:50:14 +0000 (0:00:01.043) 0:01:03.144 ********** 2026-03-13 00:51:29.231044 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:51:29.231054 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:51:29.231060 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:51:29.231066 | orchestrator | 2026-03-13 00:51:29.231071 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-13 00:51:29.231077 | orchestrator | Friday 13 March 2026 00:50:15 +0000 (0:00:01.238) 0:01:04.382 ********** 2026-03-13 00:51:29.231082 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:51:29.231088 | orchestrator | ok: 
[testbed-node-1] 2026-03-13 00:51:29.231095 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:51:29.231100 | orchestrator | 2026-03-13 00:51:29.231110 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-03-13 00:51:29.231116 | orchestrator | Friday 13 March 2026 00:50:15 +0000 (0:00:00.233) 0:01:04.616 ********** 2026-03-13 00:51:29.231143 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:51:29.231153 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:51:29.231157 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:51:29.231161 | orchestrator | 2026-03-13 00:51:29.231166 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-13 00:51:29.231170 | orchestrator | Friday 13 March 2026 00:50:16 +0000 (0:00:00.308) 0:01:04.925 ********** 2026-03-13 00:51:29.231174 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:51:29.231178 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:51:29.231182 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:51:29.231187 | orchestrator | 2026-03-13 00:51:29.231191 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-13 00:51:29.231195 | orchestrator | Friday 13 March 2026 00:50:16 +0000 (0:00:00.470) 0:01:05.395 ********** 2026-03-13 00:51:29.231200 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:51:29.231204 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:51:29.231208 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:51:29.231212 | orchestrator | 2026-03-13 00:51:29.231217 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-13 00:51:29.231221 | orchestrator | Friday 13 March 2026 00:50:17 +0000 (0:00:00.417) 0:01:05.813 ********** 2026-03-13 00:51:29.231225 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:51:29.231229 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:51:29.231234 | orchestrator | 
skipping: [testbed-node-2] 2026-03-13 00:51:29.231302 | orchestrator | 2026-03-13 00:51:29.231309 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-03-13 00:51:29.231314 | orchestrator | Friday 13 March 2026 00:50:17 +0000 (0:00:00.253) 0:01:06.067 ********** 2026-03-13 00:51:29.231320 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:51:29.231326 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:51:29.231333 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:51:29.231339 | orchestrator | 2026-03-13 00:51:29.231345 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-13 00:51:29.231352 | orchestrator | Friday 13 March 2026 00:50:17 +0000 (0:00:00.297) 0:01:06.364 ********** 2026-03-13 00:51:29.231358 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:51:29.231365 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:51:29.231371 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:51:29.231378 | orchestrator | 2026-03-13 00:51:29.231384 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-03-13 00:51:29.231389 | orchestrator | Friday 13 March 2026 00:50:17 +0000 (0:00:00.260) 0:01:06.625 ********** 2026-03-13 00:51:29.231395 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:51:29.231402 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:51:29.231408 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:51:29.231414 | orchestrator | 2026-03-13 00:51:29.231422 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-03-13 00:51:29.231426 | orchestrator | Friday 13 March 2026 00:50:18 +0000 (0:00:00.520) 0:01:07.145 ********** 2026-03-13 00:51:29.231429 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:51:29.231433 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:51:29.231437 | orchestrator | 
skipping: [testbed-node-2] 2026-03-13 00:51:29.231440 | orchestrator | 2026-03-13 00:51:29.231444 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-03-13 00:51:29.231448 | orchestrator | Friday 13 March 2026 00:50:18 +0000 (0:00:00.326) 0:01:07.472 ********** 2026-03-13 00:51:29.231452 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:51:29.231455 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:51:29.231459 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:51:29.231462 | orchestrator | 2026-03-13 00:51:29.231466 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-03-13 00:51:29.231470 | orchestrator | Friday 13 March 2026 00:50:19 +0000 (0:00:00.303) 0:01:07.776 ********** 2026-03-13 00:51:29.231473 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:51:29.231477 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:51:29.231489 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:51:29.231493 | orchestrator | 2026-03-13 00:51:29.231497 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-03-13 00:51:29.231501 | orchestrator | Friday 13 March 2026 00:50:19 +0000 (0:00:00.337) 0:01:08.113 ********** 2026-03-13 00:51:29.231505 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:51:29.231508 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:51:29.231512 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:51:29.231515 | orchestrator | 2026-03-13 00:51:29.231519 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-03-13 00:51:29.231523 | orchestrator | Friday 13 March 2026 00:50:19 +0000 (0:00:00.533) 0:01:08.647 ********** 2026-03-13 00:51:29.231527 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:51:29.231530 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:51:29.231534 | orchestrator | 
skipping: [testbed-node-2] 2026-03-13 00:51:29.231538 | orchestrator | 2026-03-13 00:51:29.231541 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-03-13 00:51:29.231545 | orchestrator | Friday 13 March 2026 00:50:20 +0000 (0:00:00.854) 0:01:09.501 ********** 2026-03-13 00:51:29.231549 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:51:29.231552 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:51:29.231556 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:51:29.231560 | orchestrator | 2026-03-13 00:51:29.231563 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-03-13 00:51:29.231567 | orchestrator | Friday 13 March 2026 00:50:21 +0000 (0:00:00.427) 0:01:09.929 ********** 2026-03-13 00:51:29.231571 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:51:29.231578 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:51:29.231582 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:51:29.231585 | orchestrator | 2026-03-13 00:51:29.231589 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-03-13 00:51:29.231593 | orchestrator | Friday 13 March 2026 00:50:21 +0000 (0:00:00.276) 0:01:10.205 ********** 2026-03-13 00:51:29.231596 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:51:29.231600 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:51:29.231608 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:51:29.231612 | orchestrator | 2026-03-13 00:51:29.231616 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-13 00:51:29.231620 | orchestrator | Friday 13 March 2026 00:50:21 +0000 (0:00:00.381) 0:01:10.587 ********** 2026-03-13 00:51:29.231624 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:51:29.231627 | 
orchestrator | 2026-03-13 00:51:29.231631 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-03-13 00:51:29.231635 | orchestrator | Friday 13 March 2026 00:50:22 +0000 (0:00:00.756) 0:01:11.344 ********** 2026-03-13 00:51:29.231638 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:51:29.231642 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:51:29.231646 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:51:29.231650 | orchestrator | 2026-03-13 00:51:29.231653 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-03-13 00:51:29.231657 | orchestrator | Friday 13 March 2026 00:50:23 +0000 (0:00:00.697) 0:01:12.041 ********** 2026-03-13 00:51:29.231661 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:51:29.231664 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:51:29.231668 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:51:29.231672 | orchestrator | 2026-03-13 00:51:29.231675 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-03-13 00:51:29.231679 | orchestrator | Friday 13 March 2026 00:50:23 +0000 (0:00:00.464) 0:01:12.506 ********** 2026-03-13 00:51:29.231683 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:51:29.231687 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:51:29.231690 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:51:29.231694 | orchestrator | 2026-03-13 00:51:29.231700 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-03-13 00:51:29.231709 | orchestrator | Friday 13 March 2026 00:50:24 +0000 (0:00:00.462) 0:01:12.968 ********** 2026-03-13 00:51:29.231715 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:51:29.231720 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:51:29.231725 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:51:29.231730 | orchestrator | 2026-03-13 
00:51:29.231736 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-03-13 00:51:29.231741 | orchestrator | Friday 13 March 2026 00:50:24 +0000 (0:00:00.277) 0:01:13.246 ********** 2026-03-13 00:51:29.231747 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:51:29.231752 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:51:29.231757 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:51:29.231762 | orchestrator | 2026-03-13 00:51:29.231768 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-03-13 00:51:29.231773 | orchestrator | Friday 13 March 2026 00:50:24 +0000 (0:00:00.226) 0:01:13.472 ********** 2026-03-13 00:51:29.231779 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:51:29.231785 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:51:29.231792 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:51:29.231798 | orchestrator | 2026-03-13 00:51:29.231805 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-03-13 00:51:29.231811 | orchestrator | Friday 13 March 2026 00:50:25 +0000 (0:00:00.275) 0:01:13.748 ********** 2026-03-13 00:51:29.231818 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:51:29.231824 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:51:29.231830 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:51:29.231836 | orchestrator | 2026-03-13 00:51:29.231842 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-03-13 00:51:29.231845 | orchestrator | Friday 13 March 2026 00:50:25 +0000 (0:00:00.873) 0:01:14.621 ********** 2026-03-13 00:51:29.231849 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:51:29.231853 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:51:29.231856 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:51:29.231860 | orchestrator 
| 2026-03-13 00:51:29.231864 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-13 00:51:29.231868 | orchestrator | Friday 13 March 2026 00:50:26 +0000 (0:00:00.599) 0:01:15.220 ********** 2026-03-13 00:51:29.231872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.231883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.231887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.231897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.231903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.231914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.231920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.231926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.231932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.231939 | orchestrator | 2026-03-13 00:51:29.231945 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-13 00:51:29.231951 | orchestrator | Friday 13 March 2026 00:50:28 +0000 (0:00:02.124) 0:01:17.344 ********** 2026-03-13 00:51:29.231959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.231963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.231967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.231970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-13 00:51:29.231980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.231988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.231992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.231995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.231999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232003 | orchestrator | 2026-03-13 00:51:29.232009 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-13 00:51:29.232015 | orchestrator | Friday 13 March 2026 00:50:33 +0000 (0:00:04.586) 0:01:21.931 ********** 2026-03-13 00:51:29.232020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232076 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232080 | orchestrator | 2026-03-13 00:51:29.232084 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-13 00:51:29.232087 | orchestrator | Friday 13 March 2026 00:50:35 +0000 (0:00:02.516) 0:01:24.448 ********** 2026-03-13 00:51:29.232091 | orchestrator | 2026-03-13 00:51:29.232095 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-13 00:51:29.232099 | orchestrator | Friday 13 March 2026 00:50:35 +0000 (0:00:00.061) 0:01:24.510 ********** 2026-03-13 00:51:29.232102 | orchestrator | 2026-03-13 00:51:29.232106 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-13 00:51:29.232110 | orchestrator | Friday 13 March 2026 00:50:35 +0000 (0:00:00.062) 0:01:24.572 ********** 2026-03-13 00:51:29.232113 | orchestrator | 2026-03-13 00:51:29.232117 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-13 00:51:29.232138 | orchestrator | Friday 13 March 2026 00:50:35 +0000 (0:00:00.064) 0:01:24.637 ********** 2026-03-13 00:51:29.232142 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:51:29.232146 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:51:29.232150 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:51:29.232153 | orchestrator | 2026-03-13 00:51:29.232157 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-13 00:51:29.232161 | orchestrator | Friday 13 March 2026 00:50:42 +0000 
(0:00:06.621) 0:01:31.258 ********** 2026-03-13 00:51:29.232165 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:51:29.232168 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:51:29.232172 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:51:29.232176 | orchestrator | 2026-03-13 00:51:29.232180 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-13 00:51:29.232183 | orchestrator | Friday 13 March 2026 00:50:44 +0000 (0:00:02.135) 0:01:33.394 ********** 2026-03-13 00:51:29.232187 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:51:29.232191 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:51:29.232195 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:51:29.232202 | orchestrator | 2026-03-13 00:51:29.232206 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-13 00:51:29.232209 | orchestrator | Friday 13 March 2026 00:50:51 +0000 (0:00:07.265) 0:01:40.659 ********** 2026-03-13 00:51:29.232213 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:51:29.232217 | orchestrator | 2026-03-13 00:51:29.232221 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-13 00:51:29.232224 | orchestrator | Friday 13 March 2026 00:50:52 +0000 (0:00:00.293) 0:01:40.953 ********** 2026-03-13 00:51:29.232228 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:51:29.232232 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:51:29.232235 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:51:29.232239 | orchestrator | 2026-03-13 00:51:29.232243 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-13 00:51:29.232247 | orchestrator | Friday 13 March 2026 00:50:53 +0000 (0:00:00.757) 0:01:41.710 ********** 2026-03-13 00:51:29.232253 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:51:29.232258 | orchestrator | skipping: 
[testbed-node-2] 2026-03-13 00:51:29.232264 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:51:29.232269 | orchestrator | 2026-03-13 00:51:29.232274 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-13 00:51:29.232279 | orchestrator | Friday 13 March 2026 00:50:53 +0000 (0:00:00.559) 0:01:42.270 ********** 2026-03-13 00:51:29.232284 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:51:29.232290 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:51:29.232295 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:51:29.232300 | orchestrator | 2026-03-13 00:51:29.232305 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-13 00:51:29.232310 | orchestrator | Friday 13 March 2026 00:50:54 +0000 (0:00:00.735) 0:01:43.005 ********** 2026-03-13 00:51:29.232319 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:51:29.232324 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:51:29.232329 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:51:29.232334 | orchestrator | 2026-03-13 00:51:29.232339 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-13 00:51:29.232346 | orchestrator | Friday 13 March 2026 00:50:55 +0000 (0:00:00.771) 0:01:43.777 ********** 2026-03-13 00:51:29.232351 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:51:29.232357 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:51:29.232366 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:51:29.232371 | orchestrator | 2026-03-13 00:51:29.232377 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-13 00:51:29.232383 | orchestrator | Friday 13 March 2026 00:50:56 +0000 (0:00:00.935) 0:01:44.713 ********** 2026-03-13 00:51:29.232388 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:51:29.232393 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:51:29.232399 | 
orchestrator | ok: [testbed-node-2] 2026-03-13 00:51:29.232405 | orchestrator | 2026-03-13 00:51:29.232410 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-03-13 00:51:29.232416 | orchestrator | Friday 13 March 2026 00:50:56 +0000 (0:00:00.844) 0:01:45.557 ********** 2026-03-13 00:51:29.232422 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:51:29.232427 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:51:29.232433 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:51:29.232440 | orchestrator | 2026-03-13 00:51:29.232445 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-13 00:51:29.232450 | orchestrator | Friday 13 March 2026 00:50:57 +0000 (0:00:00.301) 0:01:45.858 ********** 2026-03-13 00:51:29.232456 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232462 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232475 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 
00:51:29.232483 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232492 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232497 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232503 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232512 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232528 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232534 | orchestrator | 2026-03-13 00:51:29.232540 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-13 00:51:29.232546 | orchestrator | Friday 13 March 2026 00:50:58 +0000 (0:00:01.385) 0:01:47.244 ********** 2026-03-13 00:51:29.232552 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232558 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232568 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232574 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232591 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 
'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232609 | orchestrator | 2026-03-13 00:51:29.232616 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-13 00:51:29.232622 | orchestrator | Friday 13 March 2026 00:51:02 +0000 (0:00:03.844) 0:01:51.088 ********** 2026-03-13 00:51:29.232634 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232640 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232652 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232745 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232758 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232764 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 00:51:29.232772 | orchestrator | 2026-03-13 00:51:29.232776 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-13 00:51:29.232780 | orchestrator | Friday 13 March 2026 00:51:05 +0000 (0:00:02.683) 0:01:53.772 ********** 2026-03-13 00:51:29.232784 | orchestrator | 2026-03-13 00:51:29.232788 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-13 00:51:29.232792 | orchestrator | Friday 13 March 2026 00:51:05 +0000 (0:00:00.061) 0:01:53.833 ********** 2026-03-13 00:51:29.232795 | orchestrator | 2026-03-13 00:51:29.232799 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-13 00:51:29.232806 | orchestrator | Friday 13 March 2026 00:51:05 +0000 (0:00:00.064) 0:01:53.898 ********** 2026-03-13 00:51:29.232812 | orchestrator | 2026-03-13 00:51:29.232818 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-13 00:51:29.232824 | orchestrator | Friday 13 March 2026 00:51:05 +0000 (0:00:00.060) 0:01:53.958 ********** 
2026-03-13 00:51:29.232836 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:51:29.232841 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:51:29.232847 | orchestrator | 2026-03-13 00:51:29.232859 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-13 00:51:29.232865 | orchestrator | Friday 13 March 2026 00:51:11 +0000 (0:00:06.076) 0:02:00.035 ********** 2026-03-13 00:51:29.232871 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:51:29.232877 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:51:29.232884 | orchestrator | 2026-03-13 00:51:29.232890 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-13 00:51:29.232896 | orchestrator | Friday 13 March 2026 00:51:17 +0000 (0:00:06.333) 0:02:06.368 ********** 2026-03-13 00:51:29.232902 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:51:29.232909 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:51:29.232914 | orchestrator | 2026-03-13 00:51:29.232918 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-13 00:51:29.232922 | orchestrator | Friday 13 March 2026 00:51:23 +0000 (0:00:06.297) 0:02:12.666 ********** 2026-03-13 00:51:29.232925 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:51:29.232929 | orchestrator | 2026-03-13 00:51:29.232933 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-13 00:51:29.232937 | orchestrator | Friday 13 March 2026 00:51:24 +0000 (0:00:00.114) 0:02:12.780 ********** 2026-03-13 00:51:29.232940 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:51:29.232944 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:51:29.232948 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:51:29.232952 | orchestrator | 2026-03-13 00:51:29.232955 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 
2026-03-13 00:51:29.232961 | orchestrator | Friday 13 March 2026 00:51:24 +0000 (0:00:00.756) 0:02:13.536 ********** 2026-03-13 00:51:29.232966 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:51:29.232972 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:51:29.232978 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:51:29.232984 | orchestrator | 2026-03-13 00:51:29.232990 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-13 00:51:29.232997 | orchestrator | Friday 13 March 2026 00:51:25 +0000 (0:00:00.684) 0:02:14.221 ********** 2026-03-13 00:51:29.233003 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:51:29.233009 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:51:29.233015 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:51:29.233021 | orchestrator | 2026-03-13 00:51:29.233027 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-13 00:51:29.233034 | orchestrator | Friday 13 March 2026 00:51:26 +0000 (0:00:00.732) 0:02:14.953 ********** 2026-03-13 00:51:29.233040 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:51:29.233045 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:51:29.233049 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:51:29.233054 | orchestrator | 2026-03-13 00:51:29.233061 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-13 00:51:29.233066 | orchestrator | Friday 13 March 2026 00:51:26 +0000 (0:00:00.675) 0:02:15.629 ********** 2026-03-13 00:51:29.233072 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:51:29.233078 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:51:29.233084 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:51:29.233090 | orchestrator | 2026-03-13 00:51:29.233095 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-13 00:51:29.233101 | orchestrator 
| Friday 13 March 2026 00:51:27 +0000 (0:00:00.674) 0:02:16.304 ********** 2026-03-13 00:51:29.233107 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:51:29.233113 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:51:29.233119 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:51:29.233148 | orchestrator | 2026-03-13 00:51:29.233155 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 00:51:29.233161 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-13 00:51:29.233173 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-13 00:51:29.233179 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-13 00:51:29.233186 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-13 00:51:29.233192 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-13 00:51:29.233198 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-13 00:51:29.233205 | orchestrator | 2026-03-13 00:51:29.233212 | orchestrator | 2026-03-13 00:51:29.233219 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 00:51:29.233223 | orchestrator | Friday 13 March 2026 00:51:28 +0000 (0:00:01.091) 0:02:17.396 ********** 2026-03-13 00:51:29.233227 | orchestrator | =============================================================================== 2026-03-13 00:51:29.233230 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 24.04s 2026-03-13 00:51:29.233234 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.38s 2026-03-13 00:51:29.233238 | orchestrator | ovn-db : 
Restart ovn-northd container ---------------------------------- 13.56s 2026-03-13 00:51:29.233245 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 12.70s 2026-03-13 00:51:29.233249 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.47s 2026-03-13 00:51:29.233253 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.59s 2026-03-13 00:51:29.233256 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.84s 2026-03-13 00:51:29.233263 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.68s 2026-03-13 00:51:29.233267 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.52s 2026-03-13 00:51:29.233271 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.31s 2026-03-13 00:51:29.233274 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 2.12s 2026-03-13 00:51:29.233278 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.03s 2026-03-13 00:51:29.233282 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.87s 2026-03-13 00:51:29.233286 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.83s 2026-03-13 00:51:29.233289 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.69s 2026-03-13 00:51:29.233293 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.68s 2026-03-13 00:51:29.233297 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.64s 2026-03-13 00:51:29.233301 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 1.40s 2026-03-13 00:51:29.233304 | orchestrator | ovn-db : Ensuring config 
directories exist ------------------------------ 1.39s 2026-03-13 00:51:29.233308 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.27s 2026-03-13 00:51:29.233312 | orchestrator | 2026-03-13 00:51:29 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:51:29.233316 | orchestrator | 2026-03-13 00:51:29 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:51:29.233320 | orchestrator | 2026-03-13 00:51:29 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:51:32.260485 | orchestrator | 2026-03-13 00:51:32 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:51:32.260612 | orchestrator | 2026-03-13 00:51:32 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:51:32.260624 | orchestrator | 2026-03-13 00:51:32 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:51:35.297583 | orchestrator | 2026-03-13 00:51:35 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:51:35.299678 | orchestrator | 2026-03-13 00:51:35 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:51:35.299769 | orchestrator | 2026-03-13 00:51:35 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:51:38.345021 | orchestrator | 2026-03-13 00:51:38 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:51:38.346947 | orchestrator | 2026-03-13 00:51:38 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:51:38.347040 | orchestrator | 2026-03-13 00:51:38 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:51:41.384432 | orchestrator | 2026-03-13 00:51:41 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:51:41.385943 | orchestrator | 2026-03-13 00:51:41 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 
00:51:41.386006 | orchestrator | 2026-03-13 00:51:41 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:51:44.434901 | orchestrator | 2026-03-13 00:51:44 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:51:44.436351 | orchestrator | 2026-03-13 00:51:44 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:51:44.436397 | orchestrator | 2026-03-13 00:51:44 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:51:47.478456 | orchestrator | 2026-03-13 00:51:47 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:51:47.480435 | orchestrator | 2026-03-13 00:51:47 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:51:47.480512 | orchestrator | 2026-03-13 00:51:47 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:51:50.519536 | orchestrator | 2026-03-13 00:51:50 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:51:50.520697 | orchestrator | 2026-03-13 00:51:50 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:51:50.520740 | orchestrator | 2026-03-13 00:51:50 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:51:53.555857 | orchestrator | 2026-03-13 00:51:53 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:51:53.556806 | orchestrator | 2026-03-13 00:51:53 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:51:53.556855 | orchestrator | 2026-03-13 00:51:53 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:51:56.597869 | orchestrator | 2026-03-13 00:51:56 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:51:56.597930 | orchestrator | 2026-03-13 00:51:56 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:51:56.597938 | orchestrator | 2026-03-13 00:51:56 | INFO  | Wait 1 second(s) 
until the next check 2026-03-13 00:51:59.629349 | orchestrator | 2026-03-13 00:51:59 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:51:59.632360 | orchestrator | 2026-03-13 00:51:59 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:51:59.632430 | orchestrator | 2026-03-13 00:51:59 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:52:02.667589 | orchestrator | 2026-03-13 00:52:02 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:52:02.670373 | orchestrator | 2026-03-13 00:52:02 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:52:02.670457 | orchestrator | 2026-03-13 00:52:02 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:52:05.720212 | orchestrator | 2026-03-13 00:52:05 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:52:05.723578 | orchestrator | 2026-03-13 00:52:05 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:52:05.723658 | orchestrator | 2026-03-13 00:52:05 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:52:08.763975 | orchestrator | 2026-03-13 00:52:08 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:52:08.764932 | orchestrator | 2026-03-13 00:52:08 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:52:08.764981 | orchestrator | 2026-03-13 00:52:08 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:52:11.804353 | orchestrator | 2026-03-13 00:52:11 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:52:11.806349 | orchestrator | 2026-03-13 00:52:11 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:52:11.806388 | orchestrator | 2026-03-13 00:52:11 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:52:14.855972 | orchestrator | 2026-03-13 
00:52:14 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:52:14.857861 | orchestrator | 2026-03-13 00:52:14 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:52:14.858175 | orchestrator | 2026-03-13 00:52:14 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:52:17.904758 | orchestrator | 2026-03-13 00:52:17 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:52:17.905808 | orchestrator | 2026-03-13 00:52:17 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:52:17.905845 | orchestrator | 2026-03-13 00:52:17 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:52:20.946074 | orchestrator | 2026-03-13 00:52:20 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:52:20.948555 | orchestrator | 2026-03-13 00:52:20 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:52:20.948613 | orchestrator | 2026-03-13 00:52:20 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:52:23.995106 | orchestrator | 2026-03-13 00:52:23 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:52:23.995808 | orchestrator | 2026-03-13 00:52:23 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:52:23.996737 | orchestrator | 2026-03-13 00:52:23 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:52:27.055922 | orchestrator | 2026-03-13 00:52:27 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:52:27.056458 | orchestrator | 2026-03-13 00:52:27 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13 00:52:27.056545 | orchestrator | 2026-03-13 00:52:27 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:52:30.093355 | orchestrator | 2026-03-13 00:52:30 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state 
STARTED
2026-03-13 00:52:30.097177 | orchestrator | 2026-03-13 00:52:30 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED
2026-03-13 00:52:30.097284 | orchestrator | 2026-03-13 00:52:30 | INFO  | Wait 1 second(s) until the next check
2026-03-13 00:52:33.137095 | orchestrator | 2026-03-13 00:52:33 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED
2026-03-13 00:52:33.138359 | orchestrator | 2026-03-13 00:52:33 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED
2026-03-13 00:52:33.138397 | orchestrator | 2026-03-13 00:52:33 | INFO  | Wait 1 second(s) until the next check
2026-03-13 00:54:19.563234 | orchestrator | 2026-03-13 00:54:19 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED
2026-03-13 00:54:19.564831 | orchestrator | 2026-03-13 00:54:19 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state STARTED 2026-03-13
00:54:19.567944 | orchestrator | 2026-03-13 00:54:19 | INFO  | Wait 1 second(s) until the next check
2026-03-13 00:54:22.600612 | orchestrator | 2026-03-13 00:54:22 | INFO  | Task a4bc5003-8dc1-4589-b3aa-f7d50e3a689e is in state STARTED
2026-03-13 00:54:22.601462 | orchestrator | 2026-03-13 00:54:22 | INFO  | Task 547da46c-916c-4b6d-90e8-fc067c5a6077 is in state STARTED
2026-03-13 00:54:22.602453 | orchestrator | 2026-03-13 00:54:22 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED
2026-03-13 00:54:22.607269 | orchestrator | 2026-03-13 00:54:22 | INFO  | Task 344d19c9-e3fc-4e7b-ac7c-e3e1d7f7426b is in state SUCCESS
2026-03-13 00:54:22.607472 | orchestrator |
2026-03-13 00:54:22.608886 | orchestrator |
2026-03-13 00:54:22.609064 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-13 00:54:22.609087 | orchestrator |
2026-03-13 00:54:22.609099 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-13 00:54:22.610282 | orchestrator | Friday 13 March 2026 00:48:11 +0000 (0:00:00.264) 0:00:00.264 **********
2026-03-13 00:54:22.610367 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:54:22.610380 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:54:22.610387 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:54:22.610395 | orchestrator |
2026-03-13 00:54:22.610404 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-13 00:54:22.610411 | orchestrator | Friday 13 March 2026 00:48:12 +0000 (0:00:00.277) 0:00:00.541 **********
2026-03-13 00:54:22.610419 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-03-13 00:54:22.610426 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-03-13 00:54:22.610433 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-03-13 00:54:22.610440 | orchestrator |
2026-03-13 00:54:22.610446 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-03-13 00:54:22.610453 | orchestrator |
2026-03-13 00:54:22.610460 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-13 00:54:22.610467 | orchestrator | Friday 13 March 2026 00:48:12 +0000 (0:00:00.472) 0:00:01.013 **********
2026-03-13 00:54:22.610475 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 00:54:22.610482 | orchestrator |
2026-03-13 00:54:22.610489 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-03-13 00:54:22.610496 | orchestrator | Friday 13 March 2026 00:48:13 +0000 (0:00:00.641) 0:00:01.655 **********
2026-03-13 00:54:22.610502 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:54:22.610507 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:54:22.610513 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:54:22.610518 | orchestrator |
2026-03-13 00:54:22.610523 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-13 00:54:22.610529 | orchestrator | Friday 13 March 2026 00:48:14 +0000 (0:00:00.725) 0:00:02.381 **********
2026-03-13 00:54:22.610534 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 00:54:22.610540 | orchestrator |
2026-03-13 00:54:22.610546 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-03-13 00:54:22.610551 | orchestrator | Friday 13 March 2026 00:48:15 +0000 (0:00:01.019) 0:00:03.401 **********
2026-03-13 00:54:22.610558 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:54:22.610565 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:54:22.610572 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:54:22.610579 | orchestrator |
2026-03-13 00:54:22.610586 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-03-13 00:54:22.610593 | orchestrator | Friday 13 March 2026 00:48:16 +0000 (0:00:00.927) 0:00:04.328 **********
2026-03-13 00:54:22.610600 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-13 00:54:22.610607 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-13 00:54:22.610614 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-13 00:54:22.610621 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-13 00:54:22.610629 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-13 00:54:22.610636 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-13 00:54:22.610643 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-13 00:54:22.610650 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-13 00:54:22.610658 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-13 00:54:22.610665 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-13 00:54:22.610678 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-13 00:54:22.610685 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-13 00:54:22.610692 | orchestrator |
2026-03-13 00:54:22.610699 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-13 00:54:22.610705 | orchestrator | Friday 13 March 2026 00:48:19 +0000 (0:00:03.676) 0:00:08.005 **********
2026-03-13 00:54:22.610713 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-13 00:54:22.610721 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-13 00:54:22.610727 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-13 00:54:22.610734 | orchestrator |
2026-03-13 00:54:22.610742 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-13 00:54:22.610760 | orchestrator | Friday 13 March 2026 00:48:20 +0000 (0:00:00.714) 0:00:08.719 **********
2026-03-13 00:54:22.610768 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-13 00:54:22.610775 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-13 00:54:22.610782 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-13 00:54:22.610790 | orchestrator |
2026-03-13 00:54:22.610797 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-13 00:54:22.610804 | orchestrator | Friday 13 March 2026 00:48:22 +0000 (0:00:01.566) 0:00:10.286 **********
2026-03-13 00:54:22.610810 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-03-13 00:54:22.610817 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:54:22.610877 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-03-13 00:54:22.610886 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:54:22.610893 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-03-13 00:54:22.610900 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:54:22.610906 | orchestrator |
2026-03-13 00:54:22.610913 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-03-13 00:54:22.610920 | orchestrator | Friday 13 March 2026 00:48:23 +0000 (0:00:01.248) 0:00:11.534 **********
2026-03-13 00:54:22.610929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-13 00:54:22.610941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-13 00:54:22.610947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-13 00:54:22.610959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-13 00:54:22.610965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-13 00:54:22.610983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-13 00:54:22.610991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-13 00:54:22.610999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-13 00:54:22.611006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-13 00:54:22.611013 | orchestrator |
2026-03-13 00:54:22.611019 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-03-13 00:54:22.611026 | orchestrator | Friday 13 March 2026 00:48:25 +0000 (0:00:02.222) 0:00:13.756 **********
2026-03-13 00:54:22.611033 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:54:22.611040 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:54:22.611047 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:54:22.611057 | orchestrator |
2026-03-13 00:54:22.611064 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-03-13 00:54:22.611072 | orchestrator | Friday 13 March 2026 00:48:26 +0000 (0:00:01.105) 0:00:14.862 **********
2026-03-13 00:54:22.611078 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-03-13 00:54:22.611085 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-03-13 00:54:22.611092 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-03-13 00:54:22.611098 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-03-13 00:54:22.611105 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-03-13 00:54:22.611110 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-03-13 00:54:22.611116 | orchestrator |
2026-03-13 00:54:22.611122 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-03-13 00:54:22.611128 | orchestrator | Friday 13 March 2026 00:48:28 +0000 (0:00:02.263) 0:00:17.126 **********
2026-03-13 00:54:22.611134 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:54:22.611141 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:54:22.611147 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:54:22.611153 | orchestrator |
2026-03-13 00:54:22.611160 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-03-13 00:54:22.611167 | orchestrator | Friday 13 March 2026 00:48:31 +0000 (0:00:01.827) 0:00:19.291 **********
2026-03-13 00:54:22.611173 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:54:22.611180 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:54:22.611186 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:54:22.611193 | orchestrator |
2026-03-13 00:54:22.611200 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-03-13 00:54:22.611207 | orchestrator | Friday 13 March 2026 00:48:32 +0000 (0:00:01.827) 0:00:21.118 **********
2026-03-13 00:54:22.611214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-13 00:54:22.611232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-13 00:54:22.611239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-13 00:54:22.611247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7ac2484bcb02d386bafa59251cce3d166a881de7', '__omit_place_holder__7ac2484bcb02d386bafa59251cce3d166a881de7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-13 00:54:22.611260 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:54:22.611267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-13 00:54:22.611275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-13 00:54:22.611282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-13 00:54:22.611292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-13 00:54:22.611305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7ac2484bcb02d386bafa59251cce3d166a881de7', '__omit_place_holder__7ac2484bcb02d386bafa59251cce3d166a881de7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-13 00:54:22.611311 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:54:22.611317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-13 00:54:22.611328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-13 00:54:22.611334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7ac2484bcb02d386bafa59251cce3d166a881de7', '__omit_place_holder__7ac2484bcb02d386bafa59251cce3d166a881de7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-13 00:54:22.611341 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.611348 | orchestrator | 2026-03-13 00:54:22.611355 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-03-13 00:54:22.611362 | orchestrator | Friday 13 March 2026 00:48:33 +0000 (0:00:00.944) 0:00:22.063 ********** 2026-03-13 00:54:22.611369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-13 00:54:22.611379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-13 00:54:22.611392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-13 00:54:22.611399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-13 00:54:22.611410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-13 00:54:22.611417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7ac2484bcb02d386bafa59251cce3d166a881de7', '__omit_place_holder__7ac2484bcb02d386bafa59251cce3d166a881de7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-13 00:54:22.611424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-13 00:54:22.611431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-13 00:54:22.611441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7ac2484bcb02d386bafa59251cce3d166a881de7', '__omit_place_holder__7ac2484bcb02d386bafa59251cce3d166a881de7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-13 00:54:22.611453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-13 00:54:22.611468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-13 00:54:22.611476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__7ac2484bcb02d386bafa59251cce3d166a881de7', '__omit_place_holder__7ac2484bcb02d386bafa59251cce3d166a881de7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-13 00:54:22.611482 | orchestrator | 2026-03-13 00:54:22.611489 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-13 00:54:22.611495 | orchestrator | Friday 13 March 2026 00:48:38 +0000 (0:00:04.706) 0:00:26.769 ********** 2026-03-13 00:54:22.611502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-13 00:54:22.611509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-13 00:54:22.611523 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-13 00:54:22.611535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-13 00:54:22.611546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-13 00:54:22.611552 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-13 00:54:22.611558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-13 00:54:22.611564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-13 00:54:22.611570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-13 00:54:22.611576 | orchestrator | 2026-03-13 00:54:22.611583 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-13 00:54:22.611590 | orchestrator | Friday 13 March 2026 00:48:42 +0000 (0:00:03.742) 0:00:30.511 ********** 2026-03-13 00:54:22.611597 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-13 00:54:22.611604 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-13 00:54:22.611611 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-13 00:54:22.611616 | orchestrator | 2026-03-13 00:54:22.611626 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-13 00:54:22.611632 | orchestrator | Friday 13 March 2026 00:48:44 +0000 (0:00:02.135) 0:00:32.647 ********** 2026-03-13 00:54:22.611643 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-13 00:54:22.611649 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-13 00:54:22.611656 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-13 00:54:22.611662 | orchestrator | 2026-03-13 00:54:22.611672 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-13 00:54:22.611679 | orchestrator | Friday 13 March 2026 00:48:48 +0000 (0:00:04.510) 0:00:37.158 ********** 
2026-03-13 00:54:22.611685 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.611691 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.611697 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.611703 | orchestrator | 2026-03-13 00:54:22.611709 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-13 00:54:22.611715 | orchestrator | Friday 13 March 2026 00:48:49 +0000 (0:00:00.788) 0:00:37.947 ********** 2026-03-13 00:54:22.611721 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-13 00:54:22.611729 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-13 00:54:22.611734 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-13 00:54:22.611740 | orchestrator | 2026-03-13 00:54:22.611746 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-13 00:54:22.611752 | orchestrator | Friday 13 March 2026 00:48:53 +0000 (0:00:03.468) 0:00:41.415 ********** 2026-03-13 00:54:22.611757 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-13 00:54:22.611764 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-13 00:54:22.611770 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-13 00:54:22.611776 | orchestrator | 2026-03-13 00:54:22.611782 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-13 00:54:22.611788 | orchestrator | Friday 13 March 2026 00:48:57 +0000 (0:00:04.525) 0:00:45.941 
********** 2026-03-13 00:54:22.611794 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-03-13 00:54:22.611800 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-03-13 00:54:22.611806 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-03-13 00:54:22.611813 | orchestrator | 2026-03-13 00:54:22.611818 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-13 00:54:22.611824 | orchestrator | Friday 13 March 2026 00:48:59 +0000 (0:00:01.890) 0:00:47.831 ********** 2026-03-13 00:54:22.611829 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-13 00:54:22.611836 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-13 00:54:22.611841 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-13 00:54:22.611875 | orchestrator | 2026-03-13 00:54:22.611882 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-13 00:54:22.611888 | orchestrator | Friday 13 March 2026 00:49:01 +0000 (0:00:01.564) 0:00:49.395 ********** 2026-03-13 00:54:22.611894 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:54:22.611900 | orchestrator | 2026-03-13 00:54:22.611907 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-03-13 00:54:22.611913 | orchestrator | Friday 13 March 2026 00:49:01 +0000 (0:00:00.744) 0:00:50.140 ********** 2026-03-13 00:54:22.611921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-13 00:54:22.611940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-13 00:54:22.611955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-13 00:54:22.611962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-13 00:54:22.611969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-13 00:54:22.611976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-13 00:54:22.611982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-13 00:54:22.611994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-13 00:54:22.612001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-13 00:54:22.612008 | orchestrator | 2026-03-13 00:54:22.612015 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-03-13 00:54:22.612061 | orchestrator | Friday 13 March 2026 00:49:04 +0000 (0:00:03.096) 0:00:53.236 ********** 2026-03-13 00:54:22.612076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-13 00:54:22.612084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-13 00:54:22.612090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-13 00:54:22.612097 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.612103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-13 00:54:22.612115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-13 00:54:22.612121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-13 00:54:22.612127 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.612136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-13 00:54:22.612148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-13 00:54:22.612155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-13 00:54:22.612161 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.612168 | orchestrator | 2026-03-13 00:54:22.612175 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-03-13 00:54:22.612182 | orchestrator | Friday 13 March 2026 00:49:06 +0000 (0:00:01.099) 0:00:54.335 ********** 2026-03-13 00:54:22.612189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-13 00:54:22.612203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-13 00:54:22.612211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-13 00:54:22.612218 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.612229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-13 00:54:22.612241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-13 00:54:22.612249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-13 00:54:22.612256 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.612264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-13 00:54:22.612279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-13 00:54:22.612287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-13 00:54:22.612294 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.612301 | orchestrator | 2026-03-13 00:54:22.612308 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-13 00:54:22.612313 | orchestrator | Friday 13 March 2026 00:49:07 +0000 (0:00:01.675) 0:00:56.010 ********** 2026-03-13 00:54:22.612319 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-13 00:54:22.612334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-13 00:54:22.612342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-13 00:54:22.612348 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.612355 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-13 00:54:22.612367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-13 00:54:22.612375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-13 00:54:22.612382 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.612390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-13 00:54:22.612404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-13 00:54:22.612416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-13 00:54:22.612424 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.612431 | orchestrator | 2026-03-13 00:54:22.612439 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS 
certificate] *** 2026-03-13 00:54:22.612446 | orchestrator | Friday 13 March 2026 00:49:09 +0000 (0:00:01.336) 0:00:57.347 ********** 2026-03-13 00:54:22.612453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-13 00:54:22.612465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-13 00:54:22.612472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2026-03-13 00:54:22.612479 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.612485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-13 00:54:22.612492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-13 00:54:22.612502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-13 00:54:22.612508 | 
orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.612519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-13 00:54:22.612527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-13 00:54:22.612539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-13 00:54:22.612547 | orchestrator | skipping: [testbed-node-2] 
2026-03-13 00:54:22.612553 | orchestrator | 2026-03-13 00:54:22.612561 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-13 00:54:22.612567 | orchestrator | Friday 13 March 2026 00:49:10 +0000 (0:00:01.057) 0:00:58.404 ********** 2026-03-13 00:54:22.612575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-13 00:54:22.612582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-13 00:54:22.612591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-13 00:54:22.612603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-13 00:54:22.612610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-13 00:54:22.612622 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.612630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-13 00:54:22.612637 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.612644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-13 00:54:22.612652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-13 00:54:22.612659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-13 00:54:22.612667 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.612674 | orchestrator | 2026-03-13 00:54:22.612681 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-03-13 00:54:22.612688 | orchestrator | Friday 13 March 2026 00:49:11 +0000 (0:00:01.442) 0:00:59.846 ********** 2026-03-13 00:54:22.612699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-13 00:54:22.613817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-13 00:54:22.613956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-13 00:54:22.613970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-13 00:54:22.613979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-13 00:54:22.613986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-13 00:54:22.613993 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.614001 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.614008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-13 00:54:22.614071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-13 00:54:22.614087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-13 00:54:22.614095 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.614103 | orchestrator | 2026-03-13 00:54:22.614110 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-03-13 00:54:22.614118 | orchestrator | Friday 13 March 2026 00:49:12 +0000 (0:00:00.977) 0:01:00.824 ********** 2026-03-13 00:54:22.614126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-13 00:54:22.614133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 
 2026-03-13 00:54:22.614141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-13 00:54:22.614148 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.614156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-13 00:54:22.614167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-13 00:54:22.614186 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-13 00:54:22.614194 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.614201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-13 00:54:22.614209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-13 00:54:22.614217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-13 00:54:22.614224 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.614231 | orchestrator | 2026-03-13 00:54:22.614239 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-03-13 00:54:22.614246 | orchestrator | Friday 13 March 2026 00:49:14 +0000 (0:00:01.745) 0:01:02.569 ********** 2026-03-13 00:54:22.614253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-13 00:54:22.614261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-13 00:54:22.614279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-13 00:54:22.614287 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.614299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-13 00:54:22.614307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-13 00:54:22.614313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-13 00:54:22.614320 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.614326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-13 00:54:22.614333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-13 00:54:22.614345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-13 00:54:22.614353 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.614360 | orchestrator | 2026-03-13 00:54:22.614370 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-13 00:54:22.614378 | orchestrator | Friday 13 March 2026 00:49:15 +0000 (0:00:01.271) 0:01:03.840 ********** 2026-03-13 00:54:22.614384 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-13 00:54:22.614392 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-13 00:54:22.614404 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-13 00:54:22.614411 | orchestrator | 2026-03-13 00:54:22.614418 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-13 00:54:22.614426 | orchestrator | Friday 13 March 2026 00:49:18 +0000 (0:00:02.466) 0:01:06.307 ********** 2026-03-13 00:54:22.614433 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-13 00:54:22.614442 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-13 00:54:22.614451 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-13 00:54:22.614458 | orchestrator | 2026-03-13 00:54:22.614466 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-13 00:54:22.614473 | orchestrator | Friday 13 March 2026 00:49:20 +0000 (0:00:02.176) 0:01:08.484 ********** 2026-03-13 00:54:22.614480 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-13 00:54:22.614488 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-13 00:54:22.614495 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-13 00:54:22.614503 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-13 00:54:22.614511 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.614519 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-13 00:54:22.614528 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.614536 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-13 00:54:22.614544 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.614553 | orchestrator | 2026-03-13 00:54:22.614560 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-03-13 00:54:22.614569 | orchestrator | Friday 13 March 2026 00:49:21 +0000 (0:00:00.956) 0:01:09.441 ********** 2026-03-13 00:54:22.614578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-13 00:54:22.614592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-13 00:54:22.614601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-13 00:54:22.614619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-13 00:54:22.614629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-13 00:54:22.614637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-13 00:54:22.614645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-13 00:54:22.614652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-13 00:54:22.614664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-13 00:54:22.614672 | orchestrator | 2026-03-13 00:54:22.614681 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-13 00:54:22.614689 | orchestrator | Friday 13 March 2026 00:49:23 +0000 (0:00:02.798) 0:01:12.239 ********** 2026-03-13 00:54:22.614698 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:54:22.614705 | orchestrator | 2026-03-13 00:54:22.614714 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-13 00:54:22.614722 | orchestrator | Friday 13 
March 2026 00:49:24 +0000 (0:00:00.538) 0:01:12.778 ********** 2026-03-13 00:54:22.614735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-13 00:54:22.614751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-13 00:54:22.614759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 
'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-13 00:54:22.614767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-13 00:54:22.614779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.614787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.614798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.614812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.614820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-13 00:54:22.614828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-13 00:54:22.614840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.614897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.614905 | orchestrator | 2026-03-13 00:54:22.614912 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-03-13 00:54:22.614919 | orchestrator | Friday 13 March 2026 00:49:29 +0000 (0:00:04.928) 0:01:17.707 ********** 2026-03-13 00:54:22.614930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-13 00:54:22.614944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': 
'30'}}})  2026-03-13 00:54:22.614951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.614959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.614973 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.614980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 
'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-13 00:54:22.614987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-13 00:54:22.614994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.615005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': 
'30'}}})  2026-03-13 00:54:22.615012 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.615023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-13 00:54:22.615031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-13 00:54:22.615043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.615050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.615057 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.615064 | orchestrator | 2026-03-13 00:54:22.615071 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-13 00:54:22.615078 | orchestrator | Friday 13 March 2026 00:49:30 +0000 (0:00:00.857) 0:01:18.564 ********** 2026-03-13 00:54:22.615086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-13 00:54:22.615094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-13 00:54:22.615102 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.615108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-13 00:54:22.615116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-13 00:54:22.615126 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.615133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-13 00:54:22.615139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-13 00:54:22.615146 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.615153 | orchestrator | 2026-03-13 00:54:22.615164 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-13 00:54:22.615172 | orchestrator | Friday 13 March 2026 00:49:31 +0000 (0:00:00.775) 0:01:19.340 ********** 2026-03-13 00:54:22.615179 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.615186 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.615193 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.615200 | orchestrator | 2026-03-13 00:54:22.615213 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-13 00:54:22.615221 | orchestrator | Friday 13 March 2026 00:49:32 +0000 (0:00:01.218) 0:01:20.558 ********** 2026-03-13 00:54:22.615227 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.615234 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.615241 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.615248 | orchestrator | 2026-03-13 00:54:22.615255 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-13 00:54:22.615262 | orchestrator | Friday 13 March 2026 
00:49:34 +0000 (0:00:01.787) 0:01:22.346 ********** 2026-03-13 00:54:22.615269 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:54:22.615276 | orchestrator | 2026-03-13 00:54:22.615283 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-13 00:54:22.615290 | orchestrator | Friday 13 March 2026 00:49:34 +0000 (0:00:00.753) 0:01:23.099 ********** 2026-03-13 00:54:22.615298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-13 00:54:22.615306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.615312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.615319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-13 00:54:22.615335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.615342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.615350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-13 00:54:22.615376 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.615385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.615392 | orchestrator | 2026-03-13 00:54:22.615400 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-03-13 00:54:22.615407 | orchestrator | Friday 13 March 2026 00:49:38 +0000 (0:00:03.266) 0:01:26.365 ********** 2026-03-13 00:54:22.615422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-13 00:54:22.615435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.615442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 
'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-13 00:54:22.615450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.615456 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.615464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.615474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.615486 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.615497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-13 00:54:22.615504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.615512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.615519 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.615526 | orchestrator | 2026-03-13 00:54:22.615533 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-13 00:54:22.615541 | orchestrator | Friday 13 March 2026 00:49:38 +0000 (0:00:00.612) 0:01:26.978 ********** 2026-03-13 00:54:22.615548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-13 00:54:22.615556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-13 00:54:22.615563 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.615570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-13 00:54:22.615577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-13 
00:54:22.615584 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.615595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-13 00:54:22.615602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-13 00:54:22.615609 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.615616 | orchestrator | 2026-03-13 00:54:22.615623 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-13 00:54:22.615634 | orchestrator | Friday 13 March 2026 00:49:39 +0000 (0:00:01.007) 0:01:27.986 ********** 2026-03-13 00:54:22.615641 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.615648 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.615654 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.615661 | orchestrator | 2026-03-13 00:54:22.615668 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-13 00:54:22.615675 | orchestrator | Friday 13 March 2026 00:49:40 +0000 (0:00:01.252) 0:01:29.239 ********** 2026-03-13 00:54:22.615682 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.615689 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.615696 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.615703 | orchestrator | 2026-03-13 00:54:22.615713 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-13 00:54:22.615721 | orchestrator | Friday 13 March 2026 00:49:43 +0000 (0:00:02.105) 0:01:31.344 ********** 2026-03-13 00:54:22.615728 | orchestrator | skipping: [testbed-node-0] 
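The haproxy-config loop items in this log all follow the same kolla-ansible service schema: each service carries a `haproxy` map whose entries have `enabled`, `mode`, `external`, `port`, and `listen_port` keys, with one internal listener and one `*_external` listener per API. A minimal sketch of how such a map could be partitioned into internal and external listeners (the `split_listeners` helper is hypothetical, for illustration only, and is not part of kolla-ansible):

```python
# Sketch: partition kolla-style haproxy service entries into internal
# and external listeners. The dict shape mirrors the loop items in the
# log above; split_listeners() is a hypothetical helper, not a real API.
def split_listeners(haproxy_map):
    internal, external = {}, {}
    for name, cfg in haproxy_map.items():
        # 'enabled' appears as the string 'yes' in these items, but
        # boolean True is also seen elsewhere in the log (e.g. radosgw)
        if cfg.get("enabled") in (True, "yes"):
            (external if cfg.get("external") else internal)[name] = cfg
    return internal, external


# Example data copied from the barbican firewall task items above
barbican = {
    "barbican_api": {
        "enabled": "yes", "mode": "http", "external": False,
        "port": "9311", "listen_port": "9311", "tls_backend": "no",
    },
    "barbican_api_external": {
        "enabled": "yes", "mode": "http", "external": True,
        "external_fqdn": "api.testbed.osism.xyz",
        "port": "9311", "listen_port": "9311", "tls_backend": "no",
    },
}

internal, external = split_listeners(barbican)
print(sorted(internal))  # ['barbican_api']
print(sorted(external))  # ['barbican_api_external']
```

This split mirrors why each task loops twice per service here: the internal listener binds the API on the internal VIP, while the external one fronts the shared `api.testbed.osism.xyz` FQDN.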
2026-03-13 00:54:22.615734 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.615741 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.615748 | orchestrator | 2026-03-13 00:54:22.615755 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-13 00:54:22.615762 | orchestrator | Friday 13 March 2026 00:49:43 +0000 (0:00:00.320) 0:01:31.665 ********** 2026-03-13 00:54:22.615769 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:54:22.615776 | orchestrator | 2026-03-13 00:54:22.615783 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-03-13 00:54:22.615789 | orchestrator | Friday 13 March 2026 00:49:44 +0000 (0:00:00.859) 0:01:32.524 ********** 2026-03-13 00:54:22.615797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-13 00:54:22.615805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check 
inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-13 00:54:22.615817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-13 00:54:22.615824 | orchestrator | 2026-03-13 00:54:22.615831 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-03-13 00:54:22.615838 | orchestrator | Friday 13 March 2026 00:49:47 +0000 (0:00:02.828) 0:01:35.353 ********** 2026-03-13 00:54:22.615873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server 
testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-13 00:54:22.615881 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.615888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-13 00:54:22.615896 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.615903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check 
inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-13 00:54:22.615910 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.615922 | orchestrator | 2026-03-13 00:54:22.615929 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-03-13 00:54:22.615936 | orchestrator | Friday 13 March 2026 00:49:48 +0000 (0:00:01.782) 0:01:37.136 ********** 2026-03-13 00:54:22.615944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-13 00:54:22.615952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-13 00:54:22.615960 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.615967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-13 00:54:22.615977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-13 00:54:22.615984 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.618338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-13 00:54:22.618394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-13 00:54:22.618404 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.618411 | orchestrator | 2026-03-13 00:54:22.618418 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-03-13 00:54:22.618425 | orchestrator 
| Friday 13 March 2026 00:49:51 +0000 (0:00:02.293) 0:01:39.430 ********** 2026-03-13 00:54:22.618432 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.618438 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.618443 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.618449 | orchestrator | 2026-03-13 00:54:22.618455 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-03-13 00:54:22.618461 | orchestrator | Friday 13 March 2026 00:49:51 +0000 (0:00:00.566) 0:01:39.996 ********** 2026-03-13 00:54:22.618467 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.618474 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.618481 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.618487 | orchestrator | 2026-03-13 00:54:22.618504 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-13 00:54:22.618510 | orchestrator | Friday 13 March 2026 00:49:53 +0000 (0:00:01.327) 0:01:41.324 ********** 2026-03-13 00:54:22.618517 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:54:22.618524 | orchestrator | 2026-03-13 00:54:22.618531 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-13 00:54:22.618539 | orchestrator | Friday 13 March 2026 00:49:54 +0000 (0:00:00.978) 0:01:42.303 ********** 2026-03-13 00:54:22.618546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-13 00:54:22.618555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.618568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.618588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.618596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-13 00:54:22.618607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.618614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.618620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.618635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-13 00:54:22.618642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.618649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.618659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.618667 | orchestrator | 2026-03-13 00:54:22.618673 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-13 00:54:22.618680 | orchestrator | Friday 13 March 2026 00:49:59 +0000 (0:00:05.477) 0:01:47.780 ********** 2026-03-13 00:54:22.618688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-13 00:54:22.618700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.618713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.618720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.618731 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.618738 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-13 00:54:22.618744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.618751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.618761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.618768 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.618778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no'}}}})  2026-03-13 00:54:22.618789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.618796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.618803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.618810 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.618817 | orchestrator | 2026-03-13 00:54:22.618824 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-13 00:54:22.618831 | orchestrator | Friday 13 March 2026 00:50:00 +0000 (0:00:00.920) 0:01:48.700 ********** 2026-03-13 00:54:22.618838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-13 00:54:22.618870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-13 00:54:22.618877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-13 00:54:22.618883 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.618891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-13 00:54:22.618897 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.618903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-13 00:54:22.618920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-13 00:54:22.618927 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.618933 | orchestrator | 2026-03-13 00:54:22.618939 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-13 00:54:22.618945 | orchestrator | Friday 13 March 2026 00:50:01 +0000 (0:00:00.817) 0:01:49.518 ********** 2026-03-13 00:54:22.618951 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.618957 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.618963 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.618969 | orchestrator | 2026-03-13 00:54:22.618975 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-13 00:54:22.618982 | orchestrator | Friday 13 March 2026 00:50:02 +0000 (0:00:01.188) 0:01:50.707 ********** 2026-03-13 00:54:22.618988 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.618994 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.619000 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.619007 | orchestrator | 2026-03-13 00:54:22.619014 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-13 00:54:22.619019 | orchestrator | Friday 13 March 2026 00:50:04 +0000 (0:00:01.764) 0:01:52.472 ********** 2026-03-13 00:54:22.619025 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.619031 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.619037 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.619044 | orchestrator | 2026-03-13 00:54:22.619049 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-03-13 00:54:22.619055 | orchestrator | Friday 13 March 2026 00:50:04 +0000 (0:00:00.412) 0:01:52.884 ********** 2026-03-13 
00:54:22.619061 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.619068 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.619074 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.619080 | orchestrator | 2026-03-13 00:54:22.619086 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-13 00:54:22.619091 | orchestrator | Friday 13 March 2026 00:50:04 +0000 (0:00:00.278) 0:01:53.163 ********** 2026-03-13 00:54:22.619103 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:54:22.619112 | orchestrator | 2026-03-13 00:54:22.619117 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-13 00:54:22.619124 | orchestrator | Friday 13 March 2026 00:50:05 +0000 (0:00:00.879) 0:01:54.042 ********** 2026-03-13 00:54:22.619131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-13 00:54:22.619140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-13 00:54:22.619156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.619168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001'}}}}) 2026-03-13 00:54:22.619176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.619183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-13 00:54:22.619190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.619197 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.619213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.619223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.619230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 
'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.619237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.619244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.619251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.619257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-13 00:54:22.619270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-13 00:54:22.619281 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.619288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.619295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.619302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.619308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.619318 | orchestrator | 2026-03-13 00:54:22.619324 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-13 00:54:22.619330 | orchestrator | Friday 13 March 2026 00:50:10 +0000 (0:00:04.934) 0:01:58.977 ********** 2026-03-13 00:54:22.619339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-13 00:54:22.619350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-13 00:54:22.619357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.619364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.619371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.619382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.619390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.619397 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.619410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-13 00:54:22.619418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-13 00:54:22.619425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.619432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.619441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.619448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.619457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.619465 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.619476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-13 00:54:22.619484 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-13 00:54:22.619491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.619502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.619510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.619519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.619530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.619537 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.619543 | orchestrator | 2026-03-13 00:54:22.619550 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 
2026-03-13 00:54:22.619557 | orchestrator | Friday 13 March 2026 00:50:12 +0000 (0:00:01.441) 0:02:00.418 ********** 2026-03-13 00:54:22.619565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-13 00:54:22.619572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-13 00:54:22.619580 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.619585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-13 00:54:22.619591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-13 00:54:22.619597 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.619604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-13 00:54:22.619617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-13 00:54:22.619624 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.619631 | orchestrator | 2026-03-13 00:54:22.619638 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-13 00:54:22.619645 | orchestrator | Friday 13 March 2026 00:50:13 +0000 
(0:00:01.821) 0:02:02.240 ********** 2026-03-13 00:54:22.619651 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.619659 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.619665 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.619672 | orchestrator | 2026-03-13 00:54:22.619679 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-13 00:54:22.619686 | orchestrator | Friday 13 March 2026 00:50:16 +0000 (0:00:02.192) 0:02:04.432 ********** 2026-03-13 00:54:22.619692 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.619699 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.619706 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.619713 | orchestrator | 2026-03-13 00:54:22.619720 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-13 00:54:22.619727 | orchestrator | Friday 13 March 2026 00:50:17 +0000 (0:00:01.815) 0:02:06.247 ********** 2026-03-13 00:54:22.619733 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.619740 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.619747 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.619754 | orchestrator | 2026-03-13 00:54:22.619761 | orchestrator | TASK [include_role : glance] *************************************************** 2026-03-13 00:54:22.619767 | orchestrator | Friday 13 March 2026 00:50:18 +0000 (0:00:00.587) 0:02:06.835 ********** 2026-03-13 00:54:22.619774 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:54:22.619781 | orchestrator | 2026-03-13 00:54:22.619787 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-13 00:54:22.619794 | orchestrator | Friday 13 March 2026 00:50:19 +0000 (0:00:00.991) 0:02:07.826 ********** 2026-03-13 00:54:22.619810 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-13 00:54:22.619824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-13 00:54:22.619836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-13 00:54:22.619868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': 
{'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-13 00:54:22.619884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-13 00:54:22.619896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-13 00:54:22.619907 | orchestrator | 2026-03-13 00:54:22.619914 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-13 00:54:22.619920 | orchestrator | Friday 13 March 2026 00:50:24 +0000 (0:00:05.444) 0:02:13.271 ********** 2026-03-13 00:54:22.619928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 
'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-13 00:54:22.620100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify 
required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-13 00:54:22.620120 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.620146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-13 00:54:22.620191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-13 00:54:22.620206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-13 00:54:22.620217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 
2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-13 00:54:22.620252 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.620265 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.620272 | orchestrator | 2026-03-13 00:54:22.620279 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-03-13 00:54:22.620286 | orchestrator | Friday 13 March 2026 00:50:28 +0000 (0:00:03.834) 0:02:17.105 ********** 2026-03-13 00:54:22.620293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-13 00:54:22.620301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-13 00:54:22.620308 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.620315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-13 00:54:22.620321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-13 00:54:22.620328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-13 00:54:22.620334 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.620340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-13 00:54:22.620346 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.620353 | orchestrator | 2026-03-13 00:54:22.620359 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-13 00:54:22.620366 | orchestrator | Friday 13 March 2026 00:50:32 +0000 (0:00:03.656) 0:02:20.762 ********** 2026-03-13 00:54:22.620381 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.620388 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.620395 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.620402 | orchestrator | 2026-03-13 00:54:22.620409 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-13 00:54:22.620416 | orchestrator | Friday 13 March 2026 00:50:33 +0000 (0:00:01.347) 0:02:22.110 ********** 2026-03-13 00:54:22.620424 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.620431 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.620438 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.620445 | orchestrator | 2026-03-13 00:54:22.620452 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-13 00:54:22.620506 | orchestrator | Friday 13 March 2026 00:50:35 +0000 (0:00:02.027) 0:02:24.137 ********** 2026-03-13 00:54:22.620514 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.620521 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.620527 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.620534 | orchestrator | 2026-03-13 00:54:22.620541 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-13 00:54:22.620548 | orchestrator | Friday 13 March 2026 00:50:36 +0000 (0:00:00.395) 0:02:24.533 ********** 2026-03-13 00:54:22.620554 | orchestrator 
| included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:54:22.620561 | orchestrator | 2026-03-13 00:54:22.620567 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-13 00:54:22.620574 | orchestrator | Friday 13 March 2026 00:50:37 +0000 (0:00:00.797) 0:02:25.330 ********** 2026-03-13 00:54:22.620581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-13 00:54:22.620589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-13 00:54:22.620596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-13 00:54:22.620604 | orchestrator | 2026-03-13 00:54:22.620611 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-13 00:54:22.620618 | orchestrator | Friday 13 March 2026 00:50:40 +0000 (0:00:03.051) 0:02:28.381 ********** 2026-03-13 00:54:22.620625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-13 00:54:22.620681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': 
'3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-13 00:54:22.620692 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.620698 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.620704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-13 00:54:22.620711 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.620717 | orchestrator | 2026-03-13 00:54:22.620724 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-13 00:54:22.620730 | orchestrator | Friday 13 March 2026 00:50:40 +0000 (0:00:00.537) 0:02:28.919 ********** 2026-03-13 00:54:22.620738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-13 00:54:22.620746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-13 00:54:22.620753 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.620760 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-13 00:54:22.620766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-13 00:54:22.620773 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.620780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-13 00:54:22.620786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-13 00:54:22.620793 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.620806 | orchestrator | 2026-03-13 00:54:22.620813 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-13 00:54:22.620820 | orchestrator | Friday 13 March 2026 00:50:41 +0000 (0:00:00.566) 0:02:29.485 ********** 2026-03-13 00:54:22.620827 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.620834 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.620840 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.620864 | orchestrator | 2026-03-13 00:54:22.620872 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-13 00:54:22.620879 | orchestrator | Friday 13 March 2026 00:50:42 +0000 (0:00:01.211) 0:02:30.697 ********** 2026-03-13 00:54:22.620886 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.620892 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.620899 | orchestrator | changed: 
[testbed-node-2] 2026-03-13 00:54:22.620905 | orchestrator | 2026-03-13 00:54:22.620912 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-13 00:54:22.620919 | orchestrator | Friday 13 March 2026 00:50:44 +0000 (0:00:01.852) 0:02:32.549 ********** 2026-03-13 00:54:22.620925 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.620932 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.620939 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.620945 | orchestrator | 2026-03-13 00:54:22.620952 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-13 00:54:22.620959 | orchestrator | Friday 13 March 2026 00:50:44 +0000 (0:00:00.422) 0:02:32.972 ********** 2026-03-13 00:54:22.620965 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:54:22.620972 | orchestrator | 2026-03-13 00:54:22.620979 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-13 00:54:22.620986 | orchestrator | Friday 13 March 2026 00:50:45 +0000 (0:00:00.833) 0:02:33.805 ********** 2026-03-13 00:54:22.621055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-13 00:54:22.621078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 
'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-13 00:54:22.621117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 
'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-13 00:54:22.621131 | orchestrator | 2026-03-13 00:54:22.621138 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-13 00:54:22.621145 | 
orchestrator | Friday 13 March 2026 00:50:48 +0000 (0:00:02.926) 0:02:36.732 ********** 2026-03-13 00:54:22.621199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-13 00:54:22.621210 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.621217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-13 00:54:22.621230 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.621285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': 
'80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-13 00:54:22.621296 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.621303 | orchestrator | 2026-03-13 00:54:22.621310 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-13 00:54:22.621316 | orchestrator | Friday 13 March 2026 00:50:49 +0000 (0:00:00.885) 0:02:37.617 ********** 2026-03-13 00:54:22.621323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-13 00:54:22.621331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-13 00:54:22.621343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-13 00:54:22.621351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-13 00:54:22.621359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-13 00:54:22.621366 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.621373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-13 00:54:22.621380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-13 00:54:22.621387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}})  2026-03-13 00:54:22.621394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-13 00:54:22.621401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-13 00:54:22.621411 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.621421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-13 00:54:22.621483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-13 00:54:22.621493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-13 00:54:22.621501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-13 00:54:22.621514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-13 00:54:22.621521 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.621528 | orchestrator | 2026-03-13 00:54:22.621535 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-13 00:54:22.621542 | orchestrator | Friday 13 March 2026 00:50:50 +0000 (0:00:00.843) 0:02:38.460 ********** 2026-03-13 00:54:22.621550 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.621557 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.621564 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.621571 | orchestrator | 2026-03-13 00:54:22.621578 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-13 00:54:22.621585 | orchestrator | Friday 13 March 2026 00:50:51 +0000 (0:00:01.212) 0:02:39.673 ********** 2026-03-13 00:54:22.621593 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.621600 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.621607 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.621614 | orchestrator | 2026-03-13 00:54:22.621624 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-13 00:54:22.621632 | orchestrator | Friday 13 March 2026 00:50:53 +0000 (0:00:01.843) 0:02:41.517 ********** 2026-03-13 00:54:22.621639 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.621647 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.621653 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.621660 | orchestrator | 2026-03-13 00:54:22.621667 | 
orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-13 00:54:22.621674 | orchestrator | Friday 13 March 2026 00:50:53 +0000 (0:00:00.296) 0:02:41.813 ********** 2026-03-13 00:54:22.621681 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.621688 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.621695 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.621704 | orchestrator | 2026-03-13 00:54:22.621710 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-13 00:54:22.621715 | orchestrator | Friday 13 March 2026 00:50:54 +0000 (0:00:00.517) 0:02:42.331 ********** 2026-03-13 00:54:22.621721 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:54:22.621727 | orchestrator | 2026-03-13 00:54:22.621732 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-13 00:54:22.621738 | orchestrator | Friday 13 March 2026 00:50:54 +0000 (0:00:00.926) 0:02:43.257 ********** 2026-03-13 00:54:22.621745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-13 00:54:22.621800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-13 00:54:22.621818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-13 00:54:22.621827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-13 00:54:22.621835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-13 00:54:22.621843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-13 00:54:22.621907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-13 00:54:22.621969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-13 00:54:22.621980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-13 00:54:22.621987 | orchestrator | 2026-03-13 00:54:22.621994 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-13 00:54:22.622005 | orchestrator | Friday 13 March 2026 00:50:59 +0000 (0:00:04.198) 0:02:47.456 ********** 2026-03-13 00:54:22.622041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-13 00:54:22.622050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-13 00:54:22.622057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-13 00:54:22.622071 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.622132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-13 00:54:22.622143 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-13 00:54:22.622151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-13 00:54:22.622159 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.622166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': 
False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-13 00:54:22.622174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-13 00:54:22.622192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-13 00:54:22.622200 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.622208 | orchestrator | 2026-03-13 00:54:22.622215 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-13 00:54:22.622272 | orchestrator | Friday 13 March 2026 00:50:59 +0000 (0:00:00.726) 0:02:48.182 ********** 2026-03-13 
00:54:22.622282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-13 00:54:22.622291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-13 00:54:22.622299 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.622306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-13 00:54:22.622313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-13 00:54:22.622318 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.622324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-13 00:54:22.622330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-13 00:54:22.622337 | orchestrator | skipping: 
[testbed-node-2] 2026-03-13 00:54:22.622343 | orchestrator | 2026-03-13 00:54:22.622350 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-13 00:54:22.622357 | orchestrator | Friday 13 March 2026 00:51:00 +0000 (0:00:00.893) 0:02:49.076 ********** 2026-03-13 00:54:22.622364 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.622371 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.622379 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.622386 | orchestrator | 2026-03-13 00:54:22.622392 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-13 00:54:22.622400 | orchestrator | Friday 13 March 2026 00:51:02 +0000 (0:00:01.241) 0:02:50.317 ********** 2026-03-13 00:54:22.622407 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.622414 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.622421 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.622427 | orchestrator | 2026-03-13 00:54:22.622435 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-13 00:54:22.622448 | orchestrator | Friday 13 March 2026 00:51:04 +0000 (0:00:01.972) 0:02:52.290 ********** 2026-03-13 00:54:22.622455 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.622462 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.622469 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.622476 | orchestrator | 2026-03-13 00:54:22.622483 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-13 00:54:22.622490 | orchestrator | Friday 13 March 2026 00:51:04 +0000 (0:00:00.437) 0:02:52.728 ********** 2026-03-13 00:54:22.622497 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:54:22.622504 | orchestrator | 2026-03-13 00:54:22.622511 | orchestrator | TASK 
[haproxy-config : Copying over magnum haproxy config] ********************* 2026-03-13 00:54:22.622518 | orchestrator | Friday 13 March 2026 00:51:05 +0000 (0:00:00.967) 0:02:53.695 ********** 2026-03-13 00:54:22.622530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-13 00:54:22.622620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.622633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 
'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-13 00:54:22.622640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.622653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-13 00:54:22.622666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.622674 | orchestrator | 2026-03-13 00:54:22.622681 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-13 00:54:22.622688 | orchestrator | Friday 13 March 2026 00:51:08 +0000 (0:00:03.147) 0:02:56.843 ********** 2026-03-13 00:54:22.622744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-13 00:54:22.622754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.622761 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.622768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-13 00:54:22.622784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.622792 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.622834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-13 00:54:22.622867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.622875 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.622882 | orchestrator | 2026-03-13 00:54:22.622889 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-13 00:54:22.622897 | orchestrator | Friday 13 March 2026 00:51:09 +0000 (0:00:00.828) 0:02:57.671 ********** 2026-03-13 00:54:22.622904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-13 00:54:22.622912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-13 00:54:22.622925 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.622932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  
2026-03-13 00:54:22.622939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-13 00:54:22.622946 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.622953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-13 00:54:22.622960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-13 00:54:22.622971 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.622980 | orchestrator | 2026-03-13 00:54:22.622987 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-13 00:54:22.622994 | orchestrator | Friday 13 March 2026 00:51:10 +0000 (0:00:00.836) 0:02:58.508 ********** 2026-03-13 00:54:22.623001 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.623008 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.623015 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.623022 | orchestrator | 2026-03-13 00:54:22.623029 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-13 00:54:22.623036 | orchestrator | Friday 13 March 2026 00:51:11 +0000 (0:00:01.407) 0:02:59.916 ********** 2026-03-13 00:54:22.623043 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.623050 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.623057 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.623064 | orchestrator | 2026-03-13 00:54:22.623070 | orchestrator | TASK [include_role : manila] 
*************************************************** 2026-03-13 00:54:22.623077 | orchestrator | Friday 13 March 2026 00:51:13 +0000 (0:00:02.265) 0:03:02.182 ********** 2026-03-13 00:54:22.623084 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:54:22.623091 | orchestrator | 2026-03-13 00:54:22.623097 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-13 00:54:22.623104 | orchestrator | Friday 13 March 2026 00:51:14 +0000 (0:00:01.068) 0:03:03.250 ********** 2026-03-13 00:54:22.623115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-13 00:54:22.623175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.623192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.623200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.623208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': 
{'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-13 00:54:22.623216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.623229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.623282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.623297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-13 00:54:22.623305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.623312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.623318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.623324 | orchestrator | 2026-03-13 00:54:22.623329 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-13 00:54:22.623335 | orchestrator | Friday 13 March 2026 00:51:18 +0000 (0:00:03.165) 0:03:06.416 ********** 2026-03-13 00:54:22.623377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-13 00:54:22.623387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.623399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.623407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.623419 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.623427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-13 00:54:22.623434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.623448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.623515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.623532 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.623540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 
'listen_port': '8786'}}}})  2026-03-13 00:54:22.623547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.623554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.623561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.623568 | 
orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.623575 | orchestrator | 2026-03-13 00:54:22.623582 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-13 00:54:22.623590 | orchestrator | Friday 13 March 2026 00:51:18 +0000 (0:00:00.627) 0:03:07.043 ********** 2026-03-13 00:54:22.623598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-13 00:54:22.623610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-13 00:54:22.623622 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.623629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-13 00:54:22.623680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-13 00:54:22.623690 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.623697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-13 00:54:22.623704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-13 00:54:22.623711 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.623718 | orchestrator | 2026-03-13 
00:54:22.623726 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-13 00:54:22.623732 | orchestrator | Friday 13 March 2026 00:51:19 +0000 (0:00:01.022) 0:03:08.066 ********** 2026-03-13 00:54:22.623739 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.623747 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.623753 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.623760 | orchestrator | 2026-03-13 00:54:22.623767 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-13 00:54:22.623774 | orchestrator | Friday 13 March 2026 00:51:21 +0000 (0:00:01.325) 0:03:09.391 ********** 2026-03-13 00:54:22.623781 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.623788 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.623795 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.623801 | orchestrator | 2026-03-13 00:54:22.623808 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-13 00:54:22.623815 | orchestrator | Friday 13 March 2026 00:51:22 +0000 (0:00:01.754) 0:03:11.146 ********** 2026-03-13 00:54:22.623822 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:54:22.623828 | orchestrator | 2026-03-13 00:54:22.623835 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-13 00:54:22.623842 | orchestrator | Friday 13 March 2026 00:51:23 +0000 (0:00:01.130) 0:03:12.277 ********** 2026-03-13 00:54:22.623869 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-13 00:54:22.623877 | orchestrator | 2026-03-13 00:54:22.623883 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-13 00:54:22.623890 | orchestrator | Friday 13 March 2026 00:51:27 +0000 (0:00:03.294) 0:03:15.572 ********** 
2026-03-13 00:54:22.623913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-13 00:54:22.623981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-13 00:54:22.623991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': 
['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-13 00:54:22.623998 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.624005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-13 00:54:22.624013 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.624072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': 
'3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-13 00:54:22.624084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-13 00:54:22.624092 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.624099 | orchestrator | 2026-03-13 00:54:22.624106 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-13 00:54:22.624113 | orchestrator | Friday 13 March 2026 00:51:29 +0000 (0:00:02.156) 0:03:17.728 ********** 
2026-03-13 00:54:22.624120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-13 00:54:22.624134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-13 00:54:22.624142 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.624217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-13 00:54:22.624229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-13 00:54:22.624237 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.624247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': 
'3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-13 00:54:22.624305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-13 00:54:22.624315 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.624321 | orchestrator | 2026-03-13 00:54:22.624327 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-13 00:54:22.624334 | orchestrator | Friday 13 March 2026 00:51:31 +0000 (0:00:02.308) 0:03:20.037 ********** 2026-03-13 
00:54:22.624340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-13 00:54:22.624347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-13 00:54:22.624353 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.624360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-13 00:54:22.624373 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-13 00:54:22.624381 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.624388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-13 00:54:22.624446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-13 00:54:22.624457 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.624464 
| orchestrator | 2026-03-13 00:54:22.624471 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-13 00:54:22.624478 | orchestrator | Friday 13 March 2026 00:51:34 +0000 (0:00:02.524) 0:03:22.562 ********** 2026-03-13 00:54:22.624486 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.624493 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.624500 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.624507 | orchestrator | 2026-03-13 00:54:22.624514 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-13 00:54:22.624520 | orchestrator | Friday 13 March 2026 00:51:36 +0000 (0:00:01.950) 0:03:24.513 ********** 2026-03-13 00:54:22.624528 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.624535 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.624541 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.624548 | orchestrator | 2026-03-13 00:54:22.624555 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-13 00:54:22.624562 | orchestrator | Friday 13 March 2026 00:51:37 +0000 (0:00:01.200) 0:03:25.713 ********** 2026-03-13 00:54:22.624570 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.624576 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.624583 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.624591 | orchestrator | 2026-03-13 00:54:22.624598 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-03-13 00:54:22.624605 | orchestrator | Friday 13 March 2026 00:51:37 +0000 (0:00:00.272) 0:03:25.986 ********** 2026-03-13 00:54:22.624612 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:54:22.624620 | orchestrator | 2026-03-13 00:54:22.624632 | orchestrator | TASK [haproxy-config : Copying over 
memcached haproxy config] ****************** 2026-03-13 00:54:22.624640 | orchestrator | Friday 13 March 2026 00:51:38 +0000 (0:00:01.147) 0:03:27.134 ********** 2026-03-13 00:54:22.624648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-13 00:54:22.624656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-13 00:54:22.624663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-13 00:54:22.624671 | orchestrator | 2026-03-13 00:54:22.624685 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-13 00:54:22.624693 | orchestrator | Friday 13 March 2026 00:51:40 +0000 (0:00:01.443) 0:03:28.577 ********** 2026-03-13 00:54:22.624766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-13 00:54:22.624778 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.624785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-13 00:54:22.624798 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.624806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-13 00:54:22.624813 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.624820 | orchestrator | 2026-03-13 00:54:22.624826 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-13 00:54:22.624834 | orchestrator | Friday 13 March 2026 00:51:40 +0000 (0:00:00.392) 0:03:28.970 ********** 2026-03-13 00:54:22.624841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': 
True}})  2026-03-13 00:54:22.624865 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.624873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-13 00:54:22.624880 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.624887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-13 00:54:22.624894 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.624902 | orchestrator | 2026-03-13 00:54:22.624908 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-13 00:54:22.624915 | orchestrator | Friday 13 March 2026 00:51:41 +0000 (0:00:00.683) 0:03:29.654 ********** 2026-03-13 00:54:22.624922 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.624929 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.624935 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.624942 | orchestrator | 2026-03-13 00:54:22.624949 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-13 00:54:22.624956 | orchestrator | Friday 13 March 2026 00:51:41 +0000 (0:00:00.412) 0:03:30.067 ********** 2026-03-13 00:54:22.624967 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.624975 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.624982 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.624988 | orchestrator | 2026-03-13 00:54:22.624996 | orchestrator | TASK [include_role : mistral] 
**************************************************
2026-03-13 00:54:22.625003 | orchestrator | Friday 13 March 2026 00:51:42 +0000 (0:00:01.118) 0:03:31.185 **********
2026-03-13 00:54:22.625010 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:54:22.625017 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:54:22.625024 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:54:22.625030 | orchestrator |
2026-03-13 00:54:22.625038 | orchestrator | TASK [include_role : neutron] **************************************************
2026-03-13 00:54:22.625087 | orchestrator | Friday 13 March 2026 00:51:43 +0000 (0:00:00.315) 0:03:31.501 **********
2026-03-13 00:54:22.625095 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 00:54:22.625102 | orchestrator |
2026-03-13 00:54:22.625109 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2026-03-13 00:54:22.625116 | orchestrator | Friday 13 March 2026 00:51:44 +0000 (0:00:01.465) 0:03:32.967 **********
2026-03-13 00:54:22.625124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-13 00:54:22.625131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-13 00:54:22.625139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-13 00:54:22.625148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-13 00:54:22.625194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-13 00:54:22.625209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-13 00:54:22.625217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-13 00:54:22.625225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-13 00:54:22.625233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-13 00:54:22.625241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-13 00:54:22.625256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-13 00:54:22.625294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-13 00:54:22.625303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-13 00:54:22.625310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-13 00:54:22.625316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-13 00:54:22.625323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-13 00:54:22.625363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-13 00:54:22.625373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-13 00:54:22.625381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-13 00:54:22.625388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-13 00:54:22.625396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-13 00:54:22.625403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-13 00:54:22.625409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-13 00:54:22.625458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-13 00:54:22.625468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-13 00:54:22.625476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-13 00:54:22.625483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-13 00:54:22.625491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-13 00:54:22.625498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-13 00:54:22.625522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-13 00:54:22.625582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-13 00:54:22.625592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-13 00:54:22.625600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-13 00:54:22.625607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-13 00:54:22.625628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-13 00:54:22.625681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-13 00:54:22.625692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-13 00:54:22.625699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-13 00:54:22.625707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-13 00:54:22.625714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-13 00:54:22.625727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-13 00:54:22.625789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-13 00:54:22.625800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-13 00:54:22.625807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-13 00:54:22.625815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-13 00:54:22.625822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-13 00:54:22.625834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-13 00:54:22.625939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-13 00:54:22.625951 | orchestrator |
2026-03-13 00:54:22.625959 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2026-03-13 00:54:22.625966 | orchestrator | Friday 13 March 2026 00:51:48 +0000 (0:00:04.200) 0:03:37.167 **********
2026-03-13 00:54:22.625974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-13 00:54:22.625981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-13 00:54:22.625988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-13 00:54:22.626001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-13 00:54:22.626096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-13 00:54:22.626109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-13 00:54:22.626116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-13 00:54:22.626124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-13 00:54:22.626131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-13 00:54:22.626144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.626154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.626221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-13 00:54:22.626232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.626240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.626253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.626261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-13 00:54:22.626272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-13 00:54:22.626322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-13 00:54:22.626331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.626338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 
'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.626345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-13 00:54:22.626359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-13 00:54:22.626370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 
'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-13 00:54:22.626426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-13 00:54:22.626436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.626443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-13 00:54:22.626450 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.626464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.626471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-13 00:54:22.626479 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-13 00:54:22.626490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.626545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-13 00:54:22.626555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-13 00:54:22.626572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-13 00:54:22.626579 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.626587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.626597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.626649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.626658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-13 00:54:22.626673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.626681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-13 00:54:22.626688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-13 00:54:22.626698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.626726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-13 00:54:22.626734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.626741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-13 00:54:22.626754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}}})  2026-03-13 00:54:22.626761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.626769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-13 00:54:22.626799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-13 00:54:22.626807 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.626814 | orchestrator | 2026-03-13 00:54:22.626821 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-13 00:54:22.626828 | orchestrator | Friday 13 March 2026 00:51:50 +0000 (0:00:01.505) 0:03:38.672 ********** 2026-03-13 00:54:22.626836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-13 00:54:22.626844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-13 00:54:22.626872 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.626883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-13 00:54:22.626889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-13 00:54:22.626894 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.626900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-13 00:54:22.626905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-13 00:54:22.626910 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.626917 | orchestrator | 2026-03-13 00:54:22.626923 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-13 00:54:22.626930 | orchestrator | Friday 13 March 2026 00:51:52 +0000 (0:00:01.657) 0:03:40.329 ********** 2026-03-13 00:54:22.626937 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.626943 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.626950 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.626957 | orchestrator | 2026-03-13 00:54:22.626964 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-13 00:54:22.626970 | orchestrator | Friday 13 March 2026 00:51:53 +0000 (0:00:01.250) 0:03:41.580 ********** 2026-03-13 00:54:22.626977 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.626983 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.626990 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.626996 | orchestrator | 2026-03-13 00:54:22.627003 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-13 00:54:22.627010 | orchestrator | Friday 13 March 2026 00:51:55 +0000 (0:00:01.905) 0:03:43.485 ********** 2026-03-13 00:54:22.627017 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:54:22.627023 | orchestrator | 2026-03-13 00:54:22.627030 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-13 00:54:22.627037 | 
orchestrator | Friday 13 March 2026 00:51:56 +0000 (0:00:01.140) 0:03:44.626 ********** 2026-03-13 00:54:22.627044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-13 00:54:22.627081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-13 00:54:22.627096 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-13 00:54:22.627103 | orchestrator | 2026-03-13 00:54:22.627110 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-13 00:54:22.627123 | orchestrator | Friday 13 March 2026 00:51:59 +0000 (0:00:03.133) 0:03:47.759 ********** 2026-03-13 00:54:22.627139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-13 00:54:22.627147 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.627154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-13 00:54:22.627161 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.627190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-13 00:54:22.627203 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.627211 | orchestrator | 2026-03-13 00:54:22.627218 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-13 00:54:22.627224 | orchestrator | Friday 13 March 2026 00:51:59 +0000 (0:00:00.444) 0:03:48.203 ********** 2026-03-13 00:54:22.627231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-13 00:54:22.627239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-13 00:54:22.627247 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.627254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-13 00:54:22.627262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-13 00:54:22.627269 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.627276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-13 00:54:22.627284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-13 00:54:22.627292 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.627300 | orchestrator | 2026-03-13 00:54:22.627307 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-13 00:54:22.627313 | orchestrator | Friday 13 March 2026 00:52:00 +0000 (0:00:00.664) 0:03:48.868 ********** 2026-03-13 00:54:22.627319 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.627325 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.627331 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.627339 | orchestrator | 2026-03-13 00:54:22.627347 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-13 00:54:22.627355 | orchestrator | Friday 13 March 2026 00:52:02 +0000 (0:00:01.703) 0:03:50.572 ********** 2026-03-13 00:54:22.627363 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.627371 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.627379 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.627387 | orchestrator | 2026-03-13 00:54:22.627395 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-03-13 00:54:22.627403 | orchestrator | Friday 13 March 2026 00:52:04 +0000 (0:00:01.819) 0:03:52.392 ********** 2026-03-13 00:54:22.627411 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:54:22.627418 | orchestrator | 2026-03-13 00:54:22.627426 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-13 00:54:22.627434 | orchestrator | Friday 13 March 2026 00:52:05 +0000 (0:00:01.506) 0:03:53.899 ********** 2026-03-13 00:54:22.627448 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-13 00:54:22.627487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.627497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 
'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.627507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-13 00:54:22.627516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.627531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.627565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-13 00:54:22.627576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.627585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.627593 | orchestrator | 2026-03-13 00:54:22.627602 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-13 00:54:22.627610 | orchestrator | Friday 13 March 2026 00:52:09 +0000 (0:00:03.695) 0:03:57.594 ********** 2026-03-13 00:54:22.627618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-13 00:54:22.627638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.627666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 
'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.627675 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.627682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-13 00:54:22.627690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.627697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.627710 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.627721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-13 00:54:22.627748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.627757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.627764 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.627771 | orchestrator | 2026-03-13 00:54:22.627778 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-13 00:54:22.627786 | orchestrator | Friday 13 March 2026 00:52:10 +0000 (0:00:00.962) 0:03:58.556 ********** 2026-03-13 00:54:22.627793 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-13 00:54:22.627801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-13 00:54:22.627809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-13 00:54:22.627816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-13 00:54:22.627828 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.627835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-13 00:54:22.627842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-13 00:54:22.627905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-13 00:54:22.627913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-13 00:54:22.627920 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.627927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-13 00:54:22.627934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-13 00:54:22.627964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-13 00:54:22.627972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-13 00:54:22.627979 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.627986 | orchestrator | 2026-03-13 00:54:22.628017 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-13 00:54:22.628025 | orchestrator | Friday 13 March 2026 00:52:11 +0000 (0:00:00.798) 0:03:59.355 ********** 2026-03-13 00:54:22.628032 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.628039 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.628046 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.628053 | orchestrator | 2026-03-13 00:54:22.628060 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-13 00:54:22.628067 | orchestrator | Friday 13 March 2026 00:52:12 +0000 (0:00:01.410) 0:04:00.765 ********** 2026-03-13 
00:54:22.628074 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.628081 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.628088 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.628095 | orchestrator | 2026-03-13 00:54:22.628107 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-13 00:54:22.628122 | orchestrator | Friday 13 March 2026 00:52:14 +0000 (0:00:01.940) 0:04:02.706 ********** 2026-03-13 00:54:22.628131 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:54:22.628137 | orchestrator | 2026-03-13 00:54:22.628144 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-13 00:54:22.628151 | orchestrator | Friday 13 March 2026 00:52:15 +0000 (0:00:01.409) 0:04:04.115 ********** 2026-03-13 00:54:22.628158 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-03-13 00:54:22.628166 | orchestrator | 2026-03-13 00:54:22.628181 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-13 00:54:22.628188 | orchestrator | Friday 13 March 2026 00:52:16 +0000 (0:00:00.822) 0:04:04.938 ********** 2026-03-13 00:54:22.628196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-13 00:54:22.628204 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-13 00:54:22.628212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-13 00:54:22.628219 | orchestrator | 2026-03-13 00:54:22.628226 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-13 00:54:22.628234 | orchestrator | Friday 13 March 2026 00:52:20 +0000 (0:00:04.323) 0:04:09.261 ********** 2026-03-13 00:54:22.628241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-13 00:54:22.628247 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.628258 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-13 00:54:22.628265 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.628295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-13 00:54:22.628303 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.628309 | orchestrator | 2026-03-13 00:54:22.628315 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-13 00:54:22.628320 | orchestrator | Friday 13 March 2026 00:52:21 +0000 (0:00:01.014) 0:04:10.276 ********** 2026-03-13 00:54:22.628331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-13 00:54:22.628337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  
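The changed/skipping lines above come from haproxy-config tasks looping over per-service listener dicts like the nova-novncproxy item printed in the log: the plain "Copying over ... haproxy config" task renders every enabled listener, while the "single external frontend" variant only applies when that mode is active, so each item is skipped here. As an illustrative sketch only (the function and flag names are hypothetical, not kolla-ansible's actual implementation), the selection logic can be modeled as:

```python
# Model of a kolla-ansible-style haproxy-config loop.  The dict mirrors
# the nova-novncproxy item from the log; listeners_to_render() and
# single_external_frontend are illustrative names, not real kolla API.

SERVICE = {
    "nova-novncproxy": {
        "group": "nova-novncproxy",
        "enabled": True,
        "haproxy": {
            "nova_novncproxy": {
                "enabled": True, "mode": "http", "external": False,
                "port": "6080", "listen_port": "6080",
                "backend_http_extra": ["timeout tunnel 1h"],
            },
            "nova_novncproxy_external": {
                "enabled": True, "mode": "http", "external": True,
                "external_fqdn": "api.testbed.osism.xyz",
                "port": "6080", "listen_port": "6080",
                "backend_http_extra": ["timeout tunnel 1h"],
            },
        },
    },
}


def listeners_to_render(services, single_external_frontend=False):
    """Return the listener names a config task would act on."""
    rendered = []
    for svc in services.values():
        if not svc.get("enabled"):
            continue  # whole service disabled -> every item skipped
        for name, listener in svc.get("haproxy", {}).items():
            if not listener.get("enabled"):
                continue
            if listener.get("external") and single_external_frontend:
                # folded into the shared external frontend instead
                continue
            rendered.append(name)
    return rendered


print(listeners_to_render(SERVICE))
# -> ['nova_novncproxy', 'nova_novncproxy_external']
```

With both listeners enabled and no single external frontend, both are rendered, matching the "changed" results on all three nodes; a disabled service (for example nova-spicehtml5proxy later in the log, with 'enabled': False) would skip every item.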
2026-03-13 00:54:22.628344 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.628350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-13 00:54:22.628357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-13 00:54:22.628364 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.628370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-13 00:54:22.628377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-13 00:54:22.628384 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.628391 | orchestrator | 2026-03-13 00:54:22.628398 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-13 00:54:22.628405 | orchestrator | Friday 13 March 2026 00:52:23 +0000 (0:00:01.550) 0:04:11.826 ********** 2026-03-13 00:54:22.628411 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.628418 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.628425 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.628432 | orchestrator | 2026-03-13 00:54:22.628439 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] 
********** 2026-03-13 00:54:22.628445 | orchestrator | Friday 13 March 2026 00:52:26 +0000 (0:00:02.605) 0:04:14.432 ********** 2026-03-13 00:54:22.628452 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.628459 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.628466 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.628472 | orchestrator | 2026-03-13 00:54:22.628479 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-13 00:54:22.628486 | orchestrator | Friday 13 March 2026 00:52:29 +0000 (0:00:02.889) 0:04:17.321 ********** 2026-03-13 00:54:22.628494 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-13 00:54:22.628501 | orchestrator | 2026-03-13 00:54:22.628508 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-13 00:54:22.628515 | orchestrator | Friday 13 March 2026 00:52:30 +0000 (0:00:01.142) 0:04:18.463 ********** 2026-03-13 00:54:22.628526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-13 00:54:22.628539 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.628567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-13 00:54:22.628574 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.628579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-13 00:54:22.628585 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.628590 | orchestrator | 2026-03-13 00:54:22.628596 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-13 00:54:22.628603 | orchestrator | Friday 13 March 2026 00:52:31 +0000 (0:00:01.127) 0:04:19.591 ********** 2026-03-13 00:54:22.628610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-13 00:54:22.628618 | orchestrator | skipping: [testbed-node-0] 2026-03-13 
00:54:22.628625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-13 00:54:22.628632 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.628639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-13 00:54:22.628647 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.628654 | orchestrator | 2026-03-13 00:54:22.628661 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-13 00:54:22.628668 | orchestrator | Friday 13 March 2026 00:52:32 +0000 (0:00:01.103) 0:04:20.694 ********** 2026-03-13 00:54:22.628675 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.628682 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.628689 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.628696 | orchestrator | 2026-03-13 00:54:22.628703 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-13 00:54:22.628710 | orchestrator | Friday 13 March 2026 
00:52:33 +0000 (0:00:01.528) 0:04:22.223 ********** 2026-03-13 00:54:22.628723 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:54:22.628731 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:54:22.628738 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:54:22.628745 | orchestrator | 2026-03-13 00:54:22.628752 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-13 00:54:22.628759 | orchestrator | Friday 13 March 2026 00:52:36 +0000 (0:00:02.184) 0:04:24.407 ********** 2026-03-13 00:54:22.628766 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:54:22.628772 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:54:22.628777 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:54:22.628783 | orchestrator | 2026-03-13 00:54:22.628792 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-13 00:54:22.628798 | orchestrator | Friday 13 March 2026 00:52:38 +0000 (0:00:02.571) 0:04:26.979 ********** 2026-03-13 00:54:22.628803 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-03-13 00:54:22.628808 | orchestrator | 2026-03-13 00:54:22.628814 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-13 00:54:22.628819 | orchestrator | Friday 13 March 2026 00:52:39 +0000 (0:00:00.738) 0:04:27.718 ********** 2026-03-13 00:54:22.628862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 
'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-13 00:54:22.628871 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.628878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-13 00:54:22.628884 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.628890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-13 00:54:22.628896 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.628901 | orchestrator | 2026-03-13 00:54:22.628907 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-13 00:54:22.628913 | orchestrator | Friday 13 March 2026 00:52:40 +0000 (0:00:01.244) 0:04:28.963 ********** 2026-03-13 00:54:22.628920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 
'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-13 00:54:22.628932 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.628938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-13 00:54:22.628945 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.628951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-13 00:54:22.628958 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.628964 | orchestrator | 2026-03-13 00:54:22.628975 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-13 00:54:22.628981 | orchestrator | Friday 13 March 2026 00:52:41 +0000 (0:00:01.172) 0:04:30.135 ********** 2026-03-13 00:54:22.628987 | orchestrator | skipping: 
[testbed-node-0] 2026-03-13 00:54:22.628992 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.628998 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.629003 | orchestrator | 2026-03-13 00:54:22.629009 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-13 00:54:22.629015 | orchestrator | Friday 13 March 2026 00:52:43 +0000 (0:00:01.369) 0:04:31.504 ********** 2026-03-13 00:54:22.629021 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:54:22.629050 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:54:22.629057 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:54:22.629062 | orchestrator | 2026-03-13 00:54:22.629067 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-13 00:54:22.629073 | orchestrator | Friday 13 March 2026 00:52:45 +0000 (0:00:02.126) 0:04:33.631 ********** 2026-03-13 00:54:22.629078 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:54:22.629084 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:54:22.629089 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:54:22.629095 | orchestrator | 2026-03-13 00:54:22.629100 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-13 00:54:22.629106 | orchestrator | Friday 13 March 2026 00:52:48 +0000 (0:00:02.840) 0:04:36.471 ********** 2026-03-13 00:54:22.629111 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:54:22.629116 | orchestrator | 2026-03-13 00:54:22.629122 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-03-13 00:54:22.629127 | orchestrator | Friday 13 March 2026 00:52:49 +0000 (0:00:01.367) 0:04:37.839 ********** 2026-03-13 00:54:22.629133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-13 00:54:22.629149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-13 00:54:22.629155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-13 00:54:22.629161 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-13 00:54:22.629171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.629196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-13 00:54:22.629202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-13 00:54:22.629208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-13 00:54:22.629218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-13 00:54:22.629224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.629232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-13 00:54:22.629254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-13 00:54:22.629260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-13 00:54:22.629266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-13 00:54:22.629275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.629281 | orchestrator | 2026-03-13 00:54:22.629287 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-13 00:54:22.629292 | orchestrator | Friday 13 March 2026 00:52:52 +0000 (0:00:03.262) 0:04:41.102 ********** 2026-03-13 00:54:22.629299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-13 00:54:22.629308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-13 00:54:22.629332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-13 00:54:22.629339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-13 00:54:22.629349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.629355 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.629362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-13 00:54:22.629367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-13 00:54:22.629376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-13 00:54:22.629398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-13 00:54:22.629404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.629414 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.629421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-13 00:54:22.629427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-13 00:54:22.629433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-13 00:54:22.629439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-13 00:54:22.629464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-13 00:54:22.629470 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.629476 | orchestrator | 2026-03-13 00:54:22.629482 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-13 00:54:22.629489 | orchestrator | Friday 13 March 2026 00:52:53 +0000 (0:00:00.653) 0:04:41.756 ********** 2026-03-13 00:54:22.629494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-13 00:54:22.629505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-13 00:54:22.629511 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.629517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}})  2026-03-13 00:54:22.629523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-13 00:54:22.629529 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.629534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-13 00:54:22.629540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-13 00:54:22.629546 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.629551 | orchestrator | 2026-03-13 00:54:22.629557 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-13 00:54:22.629563 | orchestrator | Friday 13 March 2026 00:52:54 +0000 (0:00:01.261) 0:04:43.017 ********** 2026-03-13 00:54:22.629569 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.629575 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.629580 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.629586 | orchestrator | 2026-03-13 00:54:22.629592 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-13 00:54:22.629597 | orchestrator | Friday 13 March 2026 00:52:56 +0000 (0:00:01.428) 0:04:44.446 ********** 2026-03-13 00:54:22.629603 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.629608 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.629614 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.629619 | orchestrator | 
2026-03-13 00:54:22.629624 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-13 00:54:22.629630 | orchestrator | Friday 13 March 2026 00:52:58 +0000 (0:00:02.220) 0:04:46.666 ********** 2026-03-13 00:54:22.629636 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:54:22.629641 | orchestrator | 2026-03-13 00:54:22.629647 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-13 00:54:22.629652 | orchestrator | Friday 13 March 2026 00:53:00 +0000 (0:00:01.646) 0:04:48.312 ********** 2026-03-13 00:54:22.629659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-13 00:54:22.629687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-13 00:54:22.629698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-13 00:54:22.629706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-13 00:54:22.629714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-13 00:54:22.629739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-13 00:54:22.629751 | orchestrator | 2026-03-13 00:54:22.629757 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-13 00:54:22.629764 | orchestrator | Friday 13 March 2026 00:53:04 +0000 (0:00:04.855) 0:04:53.168 ********** 2026-03-13 00:54:22.629770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-13 00:54:22.629776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 
'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-13 00:54:22.629783 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.629789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-13 00:54:22.629800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-13 00:54:22.629828 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.629835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-13 00:54:22.629842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-13 00:54:22.629903 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.629910 | orchestrator | 2026-03-13 00:54:22.629916 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-13 00:54:22.629922 | orchestrator | Friday 13 March 2026 00:53:05 +0000 (0:00:00.576) 0:04:53.745 ********** 2026-03-13 00:54:22.629928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-13 00:54:22.629935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-13 00:54:22.629941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-13 00:54:22.629948 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.629954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-13 00:54:22.629965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-13 00:54:22.629972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-13 00:54:22.629978 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.629987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-13 00:54:22.629993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-13 00:54:22.630048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-13 00:54:22.630056 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.630067 | orchestrator | 2026-03-13 00:54:22.630074 | orchestrator | TASK [proxysql-config : Copying over opensearch 
ProxySQL users config] ********* 2026-03-13 00:54:22.630081 | orchestrator | Friday 13 March 2026 00:53:06 +0000 (0:00:00.814) 0:04:54.559 ********** 2026-03-13 00:54:22.630087 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.630094 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.630101 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.630108 | orchestrator | 2026-03-13 00:54:22.630114 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-13 00:54:22.630121 | orchestrator | Friday 13 March 2026 00:53:06 +0000 (0:00:00.698) 0:04:55.258 ********** 2026-03-13 00:54:22.630128 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.630134 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.630141 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.630148 | orchestrator | 2026-03-13 00:54:22.630154 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-13 00:54:22.630161 | orchestrator | Friday 13 March 2026 00:53:08 +0000 (0:00:01.098) 0:04:56.356 ********** 2026-03-13 00:54:22.630168 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:54:22.630175 | orchestrator | 2026-03-13 00:54:22.630181 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-13 00:54:22.630188 | orchestrator | Friday 13 March 2026 00:53:09 +0000 (0:00:01.361) 0:04:57.717 ********** 2026-03-13 00:54:22.630193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-13 00:54:22.630200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-13 00:54:22.630212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:54:22.630219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:54:22.630229 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-13 00:54:22.630257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-13 00:54:22.630265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-13 00:54:22.630272 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:54:22.630278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:54:22.630290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-13 00:54:22.630297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-13 00:54:22.630306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-13 00:54:22.630331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:54:22.630340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 
00:54:22.630347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-13 00:54:22.630354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-13 00:54:22.630368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-13 00:54:22.630378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:54:22.630388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:54:22.630395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-13 00:54:22.630402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-13 00:54:22.630416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}}}})  2026-03-13 00:54:22.630423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:54:22.630430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:54:22.630440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-13 00:54:22.630454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-13 00:54:22.630462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-13 00:54:22.630475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:54:22.630481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:54:22.630488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-13 00:54:22.630495 | orchestrator |
2026-03-13 00:54:22.630501 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2026-03-13 00:54:22.630508 | orchestrator | Friday 13 March 2026 00:53:14 +0000 (0:00:04.688) 0:05:02.406 **********
2026-03-13 00:54:22.630523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-13 00:54:22.630530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-13 00:54:22.630537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:54:22.630550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:54:22.630556 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-13 00:54:22.630564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-13 00:54:22.630577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-13 00:54:22.630587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:54:22.630594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-13 00:54:22.630605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:54:22.630612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-13 00:54:22.630619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-13 00:54:22.630625 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.630632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:54:22.630642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:54:22.630653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-13 00:54:22.630660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-13 00:54:22.630672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-13 00:54:22.630679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:54:22.630686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 
'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:54:22.630697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-13 00:54:22.630708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-13 00:54:22.630715 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.630722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-13 00:54:22.630734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:54:22.630741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 00:54:22.630747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-13 00:54:22.630754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-13 00:54:22.630769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-13 00:54:22.630776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:54:22.630789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-13 00:54:22.630796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-13 00:54:22.630802 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:54:22.630809 | orchestrator |
2026-03-13 00:54:22.630815 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2026-03-13 00:54:22.630822 | orchestrator | Friday 13 March 2026 00:53:15 +0000 (0:00:00.897) 0:05:03.304 **********
2026-03-13 00:54:22.630828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-03-13 00:54:22.630835 |
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-13 00:54:22.630842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-13 00:54:22.630879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-13 00:54:22.630886 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.630892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-13 00:54:22.630898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-13 00:54:22.630905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-13 00:54:22.630916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-13 00:54:22.630922 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.630943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-13 00:54:22.630951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-13 00:54:22.630958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-13 00:54:22.630969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-13 00:54:22.630975 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.630982 | orchestrator | 2026-03-13 00:54:22.630988 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-13 00:54:22.630995 | orchestrator | Friday 13 March 2026 00:53:15 +0000 (0:00:00.901) 0:05:04.205 ********** 2026-03-13 00:54:22.631001 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.631008 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.631014 | orchestrator | skipping: 
[testbed-node-2] 2026-03-13 00:54:22.631021 | orchestrator | 2026-03-13 00:54:22.631027 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-13 00:54:22.631034 | orchestrator | Friday 13 March 2026 00:53:16 +0000 (0:00:00.398) 0:05:04.603 ********** 2026-03-13 00:54:22.631040 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.631047 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.631054 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.631060 | orchestrator | 2026-03-13 00:54:22.631067 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-13 00:54:22.631073 | orchestrator | Friday 13 March 2026 00:53:17 +0000 (0:00:01.262) 0:05:05.866 ********** 2026-03-13 00:54:22.631080 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:54:22.631086 | orchestrator | 2026-03-13 00:54:22.631092 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-13 00:54:22.631100 | orchestrator | Friday 13 March 2026 00:53:19 +0000 (0:00:01.559) 0:05:07.425 ********** 2026-03-13 00:54:22.631107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-13 00:54:22.631119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-13 00:54:22.631138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-13 00:54:22.631145 | orchestrator | 2026-03-13 00:54:22.631152 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-13 00:54:22.631159 | orchestrator | Friday 13 March 2026 00:53:21 +0000 (0:00:02.358) 0:05:09.783 ********** 2026-03-13 00:54:22.631165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-13 00:54:22.631173 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.631180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-13 00:54:22.631187 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.631197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-13 00:54:22.631209 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.631215 | orchestrator | 2026-03-13 00:54:22.631222 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-13 00:54:22.631232 | orchestrator | Friday 13 March 2026 
00:53:22 +0000 (0:00:00.560) 0:05:10.344 ********** 2026-03-13 00:54:22.631239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-13 00:54:22.631245 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.631252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-13 00:54:22.631258 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.631265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-13 00:54:22.631271 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.631277 | orchestrator | 2026-03-13 00:54:22.631284 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-13 00:54:22.631290 | orchestrator | Friday 13 March 2026 00:53:22 +0000 (0:00:00.572) 0:05:10.916 ********** 2026-03-13 00:54:22.631296 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.631303 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.631309 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.631315 | orchestrator | 2026-03-13 00:54:22.631320 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-13 00:54:22.631326 | orchestrator | Friday 13 March 2026 00:53:23 +0000 (0:00:00.399) 0:05:11.316 ********** 2026-03-13 00:54:22.631332 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.631338 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.631344 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.631350 | orchestrator | 2026-03-13 00:54:22.631357 | orchestrator | TASK [include_role : skyline] 
************************************************** 2026-03-13 00:54:22.631363 | orchestrator | Friday 13 March 2026 00:53:24 +0000 (0:00:01.143) 0:05:12.459 ********** 2026-03-13 00:54:22.631370 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:54:22.631376 | orchestrator | 2026-03-13 00:54:22.631382 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-13 00:54:22.631388 | orchestrator | Friday 13 March 2026 00:53:25 +0000 (0:00:01.704) 0:05:14.163 ********** 2026-03-13 00:54:22.631395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-13 00:54:22.631408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-13 00:54:22.631426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-13 00:54:22.631434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-13 00:54:22.631441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-13 00:54:22.631454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-13 00:54:22.631461 | orchestrator | 2026-03-13 00:54:22.631467 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-13 00:54:22.631474 | orchestrator | Friday 13 March 2026 00:53:31 +0000 (0:00:06.106) 0:05:20.270 ********** 2026-03-13 00:54:22.631488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-13 00:54:22.631496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-13 00:54:22.631502 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.631509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-13 00:54:22.631524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-13 00:54:22.631531 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.631541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-13 00:54:22.631553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-13 00:54:22.631560 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.631567 | orchestrator | 2026-03-13 00:54:22.631573 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-13 00:54:22.631580 | orchestrator | Friday 13 March 2026 00:53:32 +0000 (0:00:00.639) 0:05:20.910 ********** 2026-03-13 00:54:22.631587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-13 00:54:22.631593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-13 00:54:22.631599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-13 00:54:22.631611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-13 00:54:22.631617 | orchestrator | skipping: 
[testbed-node-0] 2026-03-13 00:54:22.631624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-13 00:54:22.631630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-13 00:54:22.631637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-13 00:54:22.631643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-13 00:54:22.631650 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.631657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-13 00:54:22.631663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-13 00:54:22.631670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-13 00:54:22.631677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-13 00:54:22.631684 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.631690 | orchestrator | 2026-03-13 00:54:22.631697 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-13 00:54:22.631703 | orchestrator | Friday 13 March 2026 00:53:34 +0000 (0:00:01.622) 0:05:22.532 ********** 2026-03-13 00:54:22.631710 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.631716 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.631723 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.631729 | orchestrator | 2026-03-13 00:54:22.631735 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-13 00:54:22.631745 | orchestrator | Friday 13 March 2026 00:53:35 +0000 (0:00:01.349) 0:05:23.882 ********** 2026-03-13 00:54:22.631751 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.631757 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.631764 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.631770 | orchestrator | 2026-03-13 00:54:22.631777 | orchestrator | TASK [include_role : swift] **************************************************** 2026-03-13 00:54:22.631783 | orchestrator | Friday 13 March 2026 00:53:37 +0000 (0:00:02.086) 0:05:25.969 ********** 2026-03-13 00:54:22.631790 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.631796 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.631802 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.631808 | orchestrator | 2026-03-13 00:54:22.631815 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-13 00:54:22.631827 | orchestrator | Friday 13 March 2026 00:53:37 +0000 (0:00:00.283) 
0:05:26.252 ********** 2026-03-13 00:54:22.631834 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.631840 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.631872 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.631879 | orchestrator | 2026-03-13 00:54:22.631886 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-13 00:54:22.631892 | orchestrator | Friday 13 March 2026 00:53:38 +0000 (0:00:00.283) 0:05:26.536 ********** 2026-03-13 00:54:22.631899 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.631905 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.631911 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.631918 | orchestrator | 2026-03-13 00:54:22.631925 | orchestrator | TASK [include_role : venus] **************************************************** 2026-03-13 00:54:22.631931 | orchestrator | Friday 13 March 2026 00:53:38 +0000 (0:00:00.483) 0:05:27.019 ********** 2026-03-13 00:54:22.631937 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.631944 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.631950 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.631957 | orchestrator | 2026-03-13 00:54:22.631963 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-03-13 00:54:22.631970 | orchestrator | Friday 13 March 2026 00:53:39 +0000 (0:00:00.278) 0:05:27.298 ********** 2026-03-13 00:54:22.631976 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.631983 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.631990 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.631997 | orchestrator | 2026-03-13 00:54:22.632004 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-13 00:54:22.632010 | orchestrator | Friday 13 March 2026 00:53:39 +0000 (0:00:00.262) 
0:05:27.561 ********** 2026-03-13 00:54:22.632017 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.632024 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.632030 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.632037 | orchestrator | 2026-03-13 00:54:22.632043 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-13 00:54:22.632049 | orchestrator | Friday 13 March 2026 00:53:40 +0000 (0:00:00.729) 0:05:28.290 ********** 2026-03-13 00:54:22.632055 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:54:22.632062 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:54:22.632069 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:54:22.632075 | orchestrator | 2026-03-13 00:54:22.632082 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-13 00:54:22.632088 | orchestrator | Friday 13 March 2026 00:53:40 +0000 (0:00:00.687) 0:05:28.978 ********** 2026-03-13 00:54:22.632094 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:54:22.632101 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:54:22.632107 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:54:22.632114 | orchestrator | 2026-03-13 00:54:22.632120 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-03-13 00:54:22.632126 | orchestrator | Friday 13 March 2026 00:53:41 +0000 (0:00:00.329) 0:05:29.308 ********** 2026-03-13 00:54:22.632133 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:54:22.632139 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:54:22.632146 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:54:22.632152 | orchestrator | 2026-03-13 00:54:22.632158 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-03-13 00:54:22.632165 | orchestrator | Friday 13 March 2026 00:53:42 +0000 (0:00:01.006) 0:05:30.314 ********** 2026-03-13 00:54:22.632171 | 
orchestrator | ok: [testbed-node-0] 2026-03-13 00:54:22.632177 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:54:22.632184 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:54:22.632190 | orchestrator | 2026-03-13 00:54:22.632196 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-03-13 00:54:22.632202 | orchestrator | Friday 13 March 2026 00:53:43 +0000 (0:00:01.357) 0:05:31.671 ********** 2026-03-13 00:54:22.632215 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:54:22.632221 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:54:22.632228 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:54:22.632235 | orchestrator | 2026-03-13 00:54:22.632241 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-03-13 00:54:22.632332 | orchestrator | Friday 13 March 2026 00:53:44 +0000 (0:00:01.066) 0:05:32.738 ********** 2026-03-13 00:54:22.632360 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.632367 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.632374 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.632380 | orchestrator | 2026-03-13 00:54:22.632386 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-03-13 00:54:22.632393 | orchestrator | Friday 13 March 2026 00:53:53 +0000 (0:00:09.315) 0:05:42.053 ********** 2026-03-13 00:54:22.632399 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:54:22.632405 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:54:22.632411 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:54:22.632417 | orchestrator | 2026-03-13 00:54:22.632427 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-03-13 00:54:22.632434 | orchestrator | Friday 13 March 2026 00:53:54 +0000 (0:00:00.725) 0:05:42.778 ********** 2026-03-13 00:54:22.632441 | orchestrator | changed: [testbed-node-2] 2026-03-13 
00:54:22.632447 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.632453 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.632459 | orchestrator | 2026-03-13 00:54:22.632466 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-03-13 00:54:22.632472 | orchestrator | Friday 13 March 2026 00:54:07 +0000 (0:00:13.238) 0:05:56.017 ********** 2026-03-13 00:54:22.632478 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:54:22.632492 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:54:22.632499 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:54:22.632505 | orchestrator | 2026-03-13 00:54:22.632512 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-03-13 00:54:22.632518 | orchestrator | Friday 13 March 2026 00:54:08 +0000 (0:00:00.754) 0:05:56.771 ********** 2026-03-13 00:54:22.632524 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:54:22.632531 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:54:22.632537 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:54:22.632544 | orchestrator | 2026-03-13 00:54:22.632550 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-03-13 00:54:22.632556 | orchestrator | Friday 13 March 2026 00:54:12 +0000 (0:00:04.297) 0:06:01.069 ********** 2026-03-13 00:54:22.632562 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.632568 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.632574 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.632580 | orchestrator | 2026-03-13 00:54:22.632586 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-03-13 00:54:22.632593 | orchestrator | Friday 13 March 2026 00:54:13 +0000 (0:00:00.299) 0:06:01.368 ********** 2026-03-13 00:54:22.632599 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.632606 | 
orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.632612 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.632619 | orchestrator | 2026-03-13 00:54:22.632625 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-03-13 00:54:22.632632 | orchestrator | Friday 13 March 2026 00:54:13 +0000 (0:00:00.521) 0:06:01.890 ********** 2026-03-13 00:54:22.632638 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.632644 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.632651 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.632657 | orchestrator | 2026-03-13 00:54:22.632664 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-03-13 00:54:22.632670 | orchestrator | Friday 13 March 2026 00:54:13 +0000 (0:00:00.331) 0:06:02.222 ********** 2026-03-13 00:54:22.632677 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.632683 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.632695 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.632701 | orchestrator | 2026-03-13 00:54:22.632707 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-03-13 00:54:22.632713 | orchestrator | Friday 13 March 2026 00:54:14 +0000 (0:00:00.322) 0:06:02.545 ********** 2026-03-13 00:54:22.632718 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.632724 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.632730 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.632736 | orchestrator | 2026-03-13 00:54:22.632742 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-03-13 00:54:22.632748 | orchestrator | Friday 13 March 2026 00:54:14 +0000 (0:00:00.301) 0:06:02.847 ********** 2026-03-13 00:54:22.632753 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:54:22.632759 | 
orchestrator | skipping: [testbed-node-1] 2026-03-13 00:54:22.632764 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:54:22.632769 | orchestrator | 2026-03-13 00:54:22.632775 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-03-13 00:54:22.632780 | orchestrator | Friday 13 March 2026 00:54:14 +0000 (0:00:00.307) 0:06:03.154 ********** 2026-03-13 00:54:22.632786 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:54:22.632793 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:54:22.632799 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:54:22.632805 | orchestrator | 2026-03-13 00:54:22.632811 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-03-13 00:54:22.632818 | orchestrator | Friday 13 March 2026 00:54:19 +0000 (0:00:05.041) 0:06:08.196 ********** 2026-03-13 00:54:22.632824 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:54:22.632830 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:54:22.632836 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:54:22.632842 | orchestrator | 2026-03-13 00:54:22.632907 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 00:54:22.632915 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-13 00:54:22.632922 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-13 00:54:22.632928 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-13 00:54:22.632935 | orchestrator | 2026-03-13 00:54:22.632941 | orchestrator | 2026-03-13 00:54:22.632947 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 00:54:22.632952 | orchestrator | Friday 13 March 2026 00:54:20 +0000 (0:00:00.829) 0:06:09.026 ********** 2026-03-13 
00:54:22.632958 | orchestrator | =============================================================================== 2026-03-13 00:54:22.632964 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.24s 2026-03-13 00:54:22.632970 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.32s 2026-03-13 00:54:22.632976 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.11s 2026-03-13 00:54:22.632987 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 5.48s 2026-03-13 00:54:22.632993 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.44s 2026-03-13 00:54:22.632998 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 5.04s 2026-03-13 00:54:22.633003 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.93s 2026-03-13 00:54:22.633008 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.93s 2026-03-13 00:54:22.633013 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 4.86s 2026-03-13 00:54:22.633026 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.71s 2026-03-13 00:54:22.633032 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.69s 2026-03-13 00:54:22.633044 | orchestrator | loadbalancer : Copying over keepalived.conf ----------------------------- 4.52s 2026-03-13 00:54:22.633051 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.51s 2026-03-13 00:54:22.633057 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.32s 2026-03-13 00:54:22.633064 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.30s 2026-03-13 
00:54:22.633070 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.20s 2026-03-13 00:54:22.633076 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.20s 2026-03-13 00:54:22.633083 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 3.83s 2026-03-13 00:54:22.633089 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.74s 2026-03-13 00:54:22.633095 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 3.70s 2026-03-13 00:54:22.633103 | orchestrator | 2026-03-13 00:54:22 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:54:25.667574 | orchestrator | 2026-03-13 00:54:25 | INFO  | Task a4bc5003-8dc1-4589-b3aa-f7d50e3a689e is in state STARTED 2026-03-13 00:54:25.667935 | orchestrator | 2026-03-13 00:54:25 | INFO  | Task 547da46c-916c-4b6d-90e8-fc067c5a6077 is in state STARTED 2026-03-13 00:54:25.668587 | orchestrator | 2026-03-13 00:54:25 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:54:25.669400 | orchestrator | 2026-03-13 00:54:25 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:54:28.705443 | orchestrator | 2026-03-13 00:54:28 | INFO  | Task a4bc5003-8dc1-4589-b3aa-f7d50e3a689e is in state STARTED 2026-03-13 00:54:28.705977 | orchestrator | 2026-03-13 00:54:28 | INFO  | Task 547da46c-916c-4b6d-90e8-fc067c5a6077 is in state STARTED 2026-03-13 00:54:28.706863 | orchestrator | 2026-03-13 00:54:28 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:54:28.707042 | orchestrator | 2026-03-13 00:54:28 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:54:31.731592 | orchestrator | 2026-03-13 00:54:31 | INFO  | Task a4bc5003-8dc1-4589-b3aa-f7d50e3a689e is in state STARTED 2026-03-13 00:54:31.732377 | orchestrator | 2026-03-13 00:54:31 | INFO  | Task 
547da46c-916c-4b6d-90e8-fc067c5a6077 is in state STARTED 2026-03-13 00:54:31.733577 | orchestrator | 2026-03-13 00:54:31 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:54:31.733643 | orchestrator | 2026-03-13 00:54:31 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:56:39.795505 | orchestrator | 2026-03-13 00:56:39 | INFO  | Task a4bc5003-8dc1-4589-b3aa-f7d50e3a689e is in state STARTED 2026-03-13 00:56:39.798248 | orchestrator | 2026-03-13 00:56:39 | INFO  | Task 547da46c-916c-4b6d-90e8-fc067c5a6077 is in state STARTED 2026-03-13 00:56:39.800633 | orchestrator |
2026-03-13 00:56:39 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:56:39.800736 | orchestrator | 2026-03-13 00:56:39 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:56:42.845550 | orchestrator | 2026-03-13 00:56:42 | INFO  | Task a4bc5003-8dc1-4589-b3aa-f7d50e3a689e is in state STARTED 2026-03-13 00:56:42.848298 | orchestrator | 2026-03-13 00:56:42 | INFO  | Task 547da46c-916c-4b6d-90e8-fc067c5a6077 is in state STARTED 2026-03-13 00:56:42.852445 | orchestrator | 2026-03-13 00:56:42 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:56:42.852512 | orchestrator | 2026-03-13 00:56:42 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:56:45.903443 | orchestrator | 2026-03-13 00:56:45 | INFO  | Task a4bc5003-8dc1-4589-b3aa-f7d50e3a689e is in state STARTED 2026-03-13 00:56:45.904562 | orchestrator | 2026-03-13 00:56:45 | INFO  | Task 547da46c-916c-4b6d-90e8-fc067c5a6077 is in state STARTED 2026-03-13 00:56:45.906980 | orchestrator | 2026-03-13 00:56:45 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state STARTED 2026-03-13 00:56:45.907758 | orchestrator | 2026-03-13 00:56:45 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:56:48.960320 | orchestrator | 2026-03-13 00:56:48 | INFO  | Task a4bc5003-8dc1-4589-b3aa-f7d50e3a689e is in state STARTED 2026-03-13 00:56:48.962242 | orchestrator | 2026-03-13 00:56:48 | INFO  | Task 547da46c-916c-4b6d-90e8-fc067c5a6077 is in state STARTED 2026-03-13 00:56:48.967566 | orchestrator | 2026-03-13 00:56:48 | INFO  | Task 4e6a7a44-c691-4b4f-9239-335f7552e57b is in state SUCCESS 2026-03-13 00:56:48.969584 | orchestrator | 2026-03-13 00:56:48.969695 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-13 00:56:48.969717 | orchestrator | 2.16.14 2026-03-13 00:56:48.969722 | orchestrator | 2026-03-13 00:56:48.969726 | orchestrator | PLAY [Prepare deployment of Ceph services] 
************************************* 2026-03-13 00:56:48.969731 | orchestrator | 2026-03-13 00:56:48.969734 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-13 00:56:48.969738 | orchestrator | Friday 13 March 2026 00:46:01 +0000 (0:00:00.827) 0:00:00.827 ********** 2026-03-13 00:56:48.969743 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:56:48.969748 | orchestrator | 2026-03-13 00:56:48.969751 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-13 00:56:48.969755 | orchestrator | Friday 13 March 2026 00:46:02 +0000 (0:00:01.041) 0:00:01.869 ********** 2026-03-13 00:56:48.969759 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.969764 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.969767 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.969771 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.969775 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.969779 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.969782 | orchestrator | 2026-03-13 00:56:48.969786 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-13 00:56:48.969790 | orchestrator | Friday 13 March 2026 00:46:04 +0000 (0:00:01.597) 0:00:03.467 ********** 2026-03-13 00:56:48.969794 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.969797 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.969801 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.969805 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.969809 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.969813 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.969816 | orchestrator | 2026-03-13 00:56:48.969820 | orchestrator | TASK [ceph-facts : Check if podman 
binary is present] ************************** 2026-03-13 00:56:48.969831 | orchestrator | Friday 13 March 2026 00:46:05 +0000 (0:00:00.941) 0:00:04.408 ********** 2026-03-13 00:56:48.969835 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.969922 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.969927 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.969950 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.969955 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.969959 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.969963 | orchestrator | 2026-03-13 00:56:48.969967 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-13 00:56:48.969971 | orchestrator | Friday 13 March 2026 00:46:06 +0000 (0:00:01.061) 0:00:05.470 ********** 2026-03-13 00:56:48.969975 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.969979 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.969982 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.969986 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.969990 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.969994 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.969997 | orchestrator | 2026-03-13 00:56:48.970001 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-13 00:56:48.970005 | orchestrator | Friday 13 March 2026 00:46:07 +0000 (0:00:00.716) 0:00:06.186 ********** 2026-03-13 00:56:48.970009 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.970189 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.970194 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.970198 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.970202 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.970206 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.970210 | orchestrator | 2026-03-13 00:56:48.970213 | orchestrator | TASK 
[ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-13 00:56:48.970217 | orchestrator | Friday 13 March 2026 00:46:07 +0000 (0:00:00.478) 0:00:06.664 ********** 2026-03-13 00:56:48.970221 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.970225 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.970228 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.970237 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.970241 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.970245 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.970249 | orchestrator | 2026-03-13 00:56:48.970252 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-13 00:56:48.970256 | orchestrator | Friday 13 March 2026 00:46:08 +0000 (0:00:00.813) 0:00:07.478 ********** 2026-03-13 00:56:48.970260 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.970264 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.970268 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.970271 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.970298 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.970303 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.970307 | orchestrator | 2026-03-13 00:56:48.970311 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-13 00:56:48.970314 | orchestrator | Friday 13 March 2026 00:46:09 +0000 (0:00:00.857) 0:00:08.335 ********** 2026-03-13 00:56:48.970318 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.970322 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.970325 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.970329 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.970333 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.970336 | orchestrator | ok: [testbed-node-2] 2026-03-13 
00:56:48.970340 | orchestrator | 2026-03-13 00:56:48.970344 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-13 00:56:48.970471 | orchestrator | Friday 13 March 2026 00:46:11 +0000 (0:00:02.213) 0:00:10.548 ********** 2026-03-13 00:56:48.970477 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-13 00:56:48.970481 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-13 00:56:48.970510 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-13 00:56:48.970515 | orchestrator | 2026-03-13 00:56:48.970519 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-13 00:56:48.970523 | orchestrator | Friday 13 March 2026 00:46:12 +0000 (0:00:00.838) 0:00:11.387 ********** 2026-03-13 00:56:48.970527 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.970530 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.970534 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.970554 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.970558 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.970562 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.970566 | orchestrator | 2026-03-13 00:56:48.970570 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-13 00:56:48.970573 | orchestrator | Friday 13 March 2026 00:46:13 +0000 (0:00:01.415) 0:00:12.803 ********** 2026-03-13 00:56:48.970577 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-13 00:56:48.970581 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-13 00:56:48.970585 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-13 
00:56:48.970588 | orchestrator | 2026-03-13 00:56:48.970592 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-13 00:56:48.970596 | orchestrator | Friday 13 March 2026 00:46:16 +0000 (0:00:02.373) 0:00:15.177 ********** 2026-03-13 00:56:48.970599 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-13 00:56:48.970603 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-13 00:56:48.970622 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-13 00:56:48.970626 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.970689 | orchestrator | 2026-03-13 00:56:48.970694 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-13 00:56:48.970698 | orchestrator | Friday 13 March 2026 00:46:16 +0000 (0:00:00.516) 0:00:15.693 ********** 2026-03-13 00:56:48.970732 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-13 00:56:48.970899 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-13 00:56:48.970905 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-13 00:56:48.970909 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.970913 | orchestrator | 2026-03-13 00:56:48.970917 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 
2026-03-13 00:56:48.970921 | orchestrator | Friday 13 March 2026 00:46:17 +0000 (0:00:00.775) 0:00:16.469 **********
2026-03-13 00:56:48.970926 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-13 00:56:48.970932 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-13 00:56:48.970936 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-13 00:56:48.970940 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.970944 | orchestrator |
2026-03-13 00:56:48.971089 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-13 00:56:48.971097 | orchestrator | Friday 13 March 2026 00:46:18 +0000 (0:00:00.939) 0:00:17.408 **********
2026-03-13 00:56:48.971144 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-13 00:46:14.594001', 'end': '2026-03-13 00:46:14.687742', 'delta': '0:00:00.093741', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-13 00:56:48.971153 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-13 00:46:15.264938', 'end': '2026-03-13 00:46:15.372475', 'delta': '0:00:00.107537', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-13 00:56:48.971170 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-13 00:46:15.830007', 'end': '2026-03-13 00:46:15.939245', 'delta': '0:00:00.109238', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-13 00:56:48.971175 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.971179 | orchestrator |
2026-03-13 00:56:48.971183 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-13 00:56:48.971186 | orchestrator | Friday 13 March 2026 00:46:18 +0000 (0:00:00.410) 0:00:17.819 **********
2026-03-13 00:56:48.971190 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:56:48.971194 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:56:48.971198 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:56:48.971202 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:56:48.971206 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:56:48.971209 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:56:48.971213 | orchestrator |
2026-03-13 00:56:48.971217 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-13 00:56:48.971248 | orchestrator | Friday 13 March 2026 00:46:20 +0000 (0:00:01.957) 0:00:19.776 **********
2026-03-13 00:56:48.971253 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-13 00:56:48.971257 | orchestrator |
2026-03-13 00:56:48.971261 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-13 00:56:48.971264 | orchestrator | Friday 13 March 2026 00:46:21 +0000 (0:00:00.780) 0:00:20.557 **********
2026-03-13 00:56:48.971268 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.971272 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.971276 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.971280 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.971283 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.971287 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.971291 | orchestrator |
2026-03-13 00:56:48.971294 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-13 00:56:48.971298 | orchestrator | Friday 13 March 2026 00:46:23 +0000 (0:00:01.953) 0:00:22.511 **********
2026-03-13 00:56:48.971302 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.971306 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.971309 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.971313 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.971317 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.971320 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.971324 | orchestrator |
2026-03-13 00:56:48.971328 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-13 00:56:48.971365 | orchestrator | Friday 13 March 2026 00:46:25 +0000 (0:00:01.634) 0:00:24.145 **********
2026-03-13 00:56:48.971370 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.971374 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.971378 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.971382 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.971385 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.971389 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.971393 | orchestrator |
2026-03-13 00:56:48.971401 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-13 00:56:48.971405 | orchestrator | Friday 13 March 2026 00:46:26 +0000 (0:00:01.347) 0:00:25.493 **********
2026-03-13 00:56:48.971409 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.971413 | orchestrator |
2026-03-13 00:56:48.971417 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-13 00:56:48.971420 | orchestrator | Friday 13 March 2026 00:46:26 +0000 (0:00:00.208) 0:00:25.701 **********
2026-03-13 00:56:48.971425 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.971432 | orchestrator |
2026-03-13 00:56:48.971442 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-13 00:56:48.972331 | orchestrator | Friday 13 March 2026 00:46:27 +0000 (0:00:00.306) 0:00:26.008 **********
2026-03-13 00:56:48.972355 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.972361 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.972367 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.972431 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.972442 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.972447 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.972454 | orchestrator |
2026-03-13 00:56:48.972460 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-13 00:56:48.972467 | orchestrator | Friday 13 March 2026 00:46:27 +0000 (0:00:00.693) 0:00:26.701 **********
2026-03-13 00:56:48.972472 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.972478 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.972484 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.972489 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.972496 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.972501 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.972507 | orchestrator |
2026-03-13 00:56:48.972514 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-13 00:56:48.972520 | orchestrator | Friday 13 March 2026 00:46:28 +0000 (0:00:00.787) 0:00:27.488 **********
2026-03-13 00:56:48.972526 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.972532 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.972538 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.972543 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.972550 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.972556 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.972563 | orchestrator |
2026-03-13 00:56:48.972569 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-13 00:56:48.972575 | orchestrator | Friday 13 March 2026 00:46:29 +0000 (0:00:00.752) 0:00:28.240 **********
2026-03-13 00:56:48.972581 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.972587 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.972594 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.972600 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.972605 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.972611 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.972617 | orchestrator |
2026-03-13 00:56:48.972623 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-13 00:56:48.972657 | orchestrator | Friday 13 March 2026 00:46:30 +0000 (0:00:01.360) 0:00:29.601 **********
2026-03-13 00:56:48.972662 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.972666 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.972670 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.972674 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.972677 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.972681 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.972685 | orchestrator |
2026-03-13 00:56:48.972689 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-13 00:56:48.972692 | orchestrator | Friday 13 March 2026 00:46:31 +0000 (0:00:00.822) 0:00:30.423 **********
2026-03-13 00:56:48.972702 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.972706 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.972709 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.972713 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.972717 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.972720 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.972724 | orchestrator |
2026-03-13 00:56:48.972728 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-13 00:56:48.972732 | orchestrator | Friday 13 March 2026 00:46:32 +0000 (0:00:00.616) 0:00:31.040 **********
2026-03-13 00:56:48.972736 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.972739 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.972743 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.972747 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.972752 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.972758 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.972764 | orchestrator |
2026-03-13 00:56:48.972769 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-13 00:56:48.972791 | orchestrator | Friday 13 March 2026 00:46:32 +0000 (0:00:00.543) 0:00:31.583 **********
2026-03-13 00:56:48.972800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b5494c86--4b11--53e5--88ab--5da9d8a68a1e-osd--block--b5494c86--4b11--53e5--88ab--5da9d8a68a1e', 'dm-uuid-LVM-LPHl0YzeI6FamkHwpYfPFLYvA4jefdeLB0n60KxVDZol4Rt6ZGCDu50Tpw7xyBAY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-13 00:56:48.972808 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b7299377--1bbd--5436--9d58--2dd820a08a2f-osd--block--b7299377--1bbd--5436--9d58--2dd820a08a2f', 'dm-uuid-LVM-HoHeTclBr30fca9ZNFZuhsY6pk6aA3QcxtHyPjIk3J5AIumWTBgltxaGzq9CnrMA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-13 00:56:48.972875 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-13 00:56:48.972884 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-13 00:56:48.972888 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-13 00:56:48.972895 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-13 00:56:48.972903 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-13 00:56:48.972908 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--49707cb0--36ac--571b--bf56--7288c46886ca-osd--block--49707cb0--36ac--571b--bf56--7288c46886ca', 'dm-uuid-LVM-lUuxAlRxeDpKHFR330Fw0ajQMZxdmGdFcZe0ZY3SvPyxgqFjJLxezDxIRmkhNvve'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-13 00:56:48.972912 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-13 00:56:48.972916 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--798cee0b--732e--51b2--a8a3--29d8c2932297-osd--block--798cee0b--732e--51b2--a8a3--29d8c2932297', 'dm-uuid-LVM-L1OBBH0k0D00ZH0dN8uE5pJTqoWU0KZEfPq0LLMud7Q5AWoDnaD4QV1JonD11yi2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-13 00:56:48.972920 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-13 00:56:48.972955 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-13 00:56:48.972960 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-13 00:56:48.972964 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-13 00:56:48.972975 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-13 00:56:48.972982 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-13 00:56:48.972990 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d', 'scsi-SQEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d-part1', 'scsi-SQEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d-part14', 'scsi-SQEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d-part15', 'scsi-SQEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d-part16', 'scsi-SQEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-13 00:56:48.973037 |
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973046 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b5494c86--4b11--53e5--88ab--5da9d8a68a1e-osd--block--b5494c86--4b11--53e5--88ab--5da9d8a68a1e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LXpqTL-G1pt-XewF-Zt4p-vrnA-Ynye-ARUN64', 'scsi-0QEMU_QEMU_HARDDISK_0e74aa03-0dd7-4a4d-94fb-6534b2ad29b5', 'scsi-SQEMU_QEMU_HARDDISK_0e74aa03-0dd7-4a4d-94fb-6534b2ad29b5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-13 00:56:48.973064 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973071 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b7299377--1bbd--5436--9d58--2dd820a08a2f-osd--block--b7299377--1bbd--5436--9d58--2dd820a08a2f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xU3y9w-lno0-fYYl-h6C2-Bafl-jXiW-zSsbBh', 'scsi-0QEMU_QEMU_HARDDISK_e5b6d572-8591-43aa-97f9-3b718c2d248a', 'scsi-SQEMU_QEMU_HARDDISK_e5b6d572-8591-43aa-97f9-3b718c2d248a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-13 00:56:48.973078 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973085 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b47ce045-806c-4f33-b887-31de2316680c', 'scsi-SQEMU_QEMU_HARDDISK_b47ce045-806c-4f33-b887-31de2316680c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-13 00:56:48.973091 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973135 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-13-00-03-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-13 00:56:48.973155 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233', 'scsi-SQEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233-part1', 'scsi-SQEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233-part14', 'scsi-SQEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233-part15', 'scsi-SQEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233-part16', 'scsi-SQEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-13 00:56:48.973164 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--49707cb0--36ac--571b--bf56--7288c46886ca-osd--block--49707cb0--36ac--571b--bf56--7288c46886ca'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JdLIXO-MqwE-lr4R-7jAl-Oajp-v9D3-BfnDcq', 'scsi-0QEMU_QEMU_HARDDISK_9a254b57-f2ae-4287-95a0-937fffba734e', 'scsi-SQEMU_QEMU_HARDDISK_9a254b57-f2ae-4287-95a0-937fffba734e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-13 00:56:48.973168 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--798cee0b--732e--51b2--a8a3--29d8c2932297-osd--block--798cee0b--732e--51b2--a8a3--29d8c2932297'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MNWIhf-nwp0-tXaF-WOrc-iNMC-u1FO-4vKX4g', 'scsi-0QEMU_QEMU_HARDDISK_5123065a-17ef-4227-8b29-db8d7701c704', 'scsi-SQEMU_QEMU_HARDDISK_5123065a-17ef-4227-8b29-db8d7701c704'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-13 00:56:48.973201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e49f76b8-3d49-472e-b9d5-6b475ff66b1a', 'scsi-SQEMU_QEMU_HARDDISK_e49f76b8-3d49-472e-b9d5-6b475ff66b1a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-13 00:56:48.973210 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--119e494c--61db--56d2--84c4--ae65d8356f6a-osd--block--119e494c--61db--56d2--84c4--ae65d8356f6a', 'dm-uuid-LVM-aKLT8JNCOXsBc0C1gwIdNTjLoGLtcq6z5t48Wuu2NVQ4Z0cbe51erZOUcnYreOLk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973216 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-13-00-03-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-13 00:56:48.973220 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5854fe4a--6d96--56a2--8017--73d7ac8736b8-osd--block--5854fe4a--6d96--56a2--8017--73d7ac8736b8', 
'dm-uuid-LVM-DanwlmyYXjv3W8jDd7gIIXAnF5dZXwutprgamNuSW6Fu1UsLU31ga3JUkWu8KPCy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973224 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973228 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973243 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973247 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973298 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973304 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973314 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973321 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.973328 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf', 'scsi-SQEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf-part1', 'scsi-SQEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf-part14', 'scsi-SQEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf-part15', 'scsi-SQEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf-part16', 'scsi-SQEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-13 00:56:48.973369 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--119e494c--61db--56d2--84c4--ae65d8356f6a-osd--block--119e494c--61db--56d2--84c4--ae65d8356f6a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-M47k6j-4Htg-8gMw-gFQx-rYEL-zlZr-SG96Cv', 'scsi-0QEMU_QEMU_HARDDISK_173613da-cd5d-4175-9e2e-faf4092bf0a3', 'scsi-SQEMU_QEMU_HARDDISK_173613da-cd5d-4175-9e2e-faf4092bf0a3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-13 00:56:48.973388 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5854fe4a--6d96--56a2--8017--73d7ac8736b8-osd--block--5854fe4a--6d96--56a2--8017--73d7ac8736b8'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-nOtw0g-XnPm-13J8-zFZd-lk1r-0DqR-r1FckL', 'scsi-0QEMU_QEMU_HARDDISK_dc527160-e7af-4d74-be06-07ea7bd10a9b', 'scsi-SQEMU_QEMU_HARDDISK_dc527160-e7af-4d74-be06-07ea7bd10a9b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-13 00:56:48.973395 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2fefda09-8576-4844-bc1b-e9a7eb3ad8aa', 'scsi-SQEMU_QEMU_HARDDISK_2fefda09-8576-4844-bc1b-e9a7eb3ad8aa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-13 00:56:48.973401 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-13-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-13 00:56:48.973407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973490 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.973494 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.973498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': 
[], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a0d7d0f-4636-493e-803a-05680bb9c3f6', 'scsi-SQEMU_QEMU_HARDDISK_5a0d7d0f-4636-493e-803a-05680bb9c3f6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a0d7d0f-4636-493e-803a-05680bb9c3f6-part1', 'scsi-SQEMU_QEMU_HARDDISK_5a0d7d0f-4636-493e-803a-05680bb9c3f6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a0d7d0f-4636-493e-803a-05680bb9c3f6-part14', 'scsi-SQEMU_QEMU_HARDDISK_5a0d7d0f-4636-493e-803a-05680bb9c3f6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a0d7d0f-4636-493e-803a-05680bb9c3f6-part15', 'scsi-SQEMU_QEMU_HARDDISK_5a0d7d0f-4636-493e-803a-05680bb9c3f6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a0d7d0f-4636-493e-803a-05680bb9c3f6-part16', 'scsi-SQEMU_QEMU_HARDDISK_5a0d7d0f-4636-493e-803a-05680bb9c3f6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-13 00:56:48.973688 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-13-00-03-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-13 00:56:48.973692 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.973699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_424b80b3-bd2d-4fbf-95b2-3708ce35a18a', 'scsi-SQEMU_QEMU_HARDDISK_424b80b3-bd2d-4fbf-95b2-3708ce35a18a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_424b80b3-bd2d-4fbf-95b2-3708ce35a18a-part1', 'scsi-SQEMU_QEMU_HARDDISK_424b80b3-bd2d-4fbf-95b2-3708ce35a18a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_424b80b3-bd2d-4fbf-95b2-3708ce35a18a-part14', 'scsi-SQEMU_QEMU_HARDDISK_424b80b3-bd2d-4fbf-95b2-3708ce35a18a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_424b80b3-bd2d-4fbf-95b2-3708ce35a18a-part15', 'scsi-SQEMU_QEMU_HARDDISK_424b80b3-bd2d-4fbf-95b2-3708ce35a18a-part15'], 'labels': ['UEFI'], 
'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_424b80b3-bd2d-4fbf-95b2-3708ce35a18a-part16', 'scsi-SQEMU_QEMU_HARDDISK_424b80b3-bd2d-4fbf-95b2-3708ce35a18a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-13 00:56:48.973746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:56:48.973769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bfdf0a9-a88b-432c-bbdd-eaea61a071f8', 'scsi-SQEMU_QEMU_HARDDISK_9bfdf0a9-a88b-432c-bbdd-eaea61a071f8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bfdf0a9-a88b-432c-bbdd-eaea61a071f8-part1', 'scsi-SQEMU_QEMU_HARDDISK_9bfdf0a9-a88b-432c-bbdd-eaea61a071f8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bfdf0a9-a88b-432c-bbdd-eaea61a071f8-part14', 'scsi-SQEMU_QEMU_HARDDISK_9bfdf0a9-a88b-432c-bbdd-eaea61a071f8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bfdf0a9-a88b-432c-bbdd-eaea61a071f8-part15', 'scsi-SQEMU_QEMU_HARDDISK_9bfdf0a9-a88b-432c-bbdd-eaea61a071f8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9bfdf0a9-a88b-432c-bbdd-eaea61a071f8-part16', 'scsi-SQEMU_QEMU_HARDDISK_9bfdf0a9-a88b-432c-bbdd-eaea61a071f8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 
'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-13 00:56:48.973790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-13-00-03-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-13 00:56:48.973844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-13-00-03-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-13 00:56:48.973852 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.973856 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.973860 | orchestrator |
2026-03-13 00:56:48.973864 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-13 00:56:48.973868 | orchestrator | Friday 13 March 2026 00:46:34 +0000 (0:00:01.427) 0:00:33.011 **********
2026-03-13 00:56:48.973873 | orchestrator | skipping:
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b5494c86--4b11--53e5--88ab--5da9d8a68a1e-osd--block--b5494c86--4b11--53e5--88ab--5da9d8a68a1e', 'dm-uuid-LVM-LPHl0YzeI6FamkHwpYfPFLYvA4jefdeLB0n60KxVDZol4Rt6ZGCDu50Tpw7xyBAY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-13 00:56:48.973882 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b7299377--1bbd--5436--9d58--2dd820a08a2f-osd--block--b7299377--1bbd--5436--9d58--2dd820a08a2f', 'dm-uuid-LVM-HoHeTclBr30fca9ZNFZuhsY6pk6aA3QcxtHyPjIk3J5AIumWTBgltxaGzq9CnrMA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-13 00:56:48.973886 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-13 00:56:48.973890 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-13 00:56:48.973898 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-13 00:56:48.973931 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-13 00:56:48.973937 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-13 00:56:48.973943 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-13 00:56:48.973965 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-13 00:56:48.973969 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-13 00:56:48.974005 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d', 'scsi-SQEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d-part1', 'scsi-SQEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d-part14', 'scsi-SQEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d-part15', 'scsi-SQEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d-part16', 'scsi-SQEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-13 00:56:48.974039 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--b5494c86--4b11--53e5--88ab--5da9d8a68a1e-osd--block--b5494c86--4b11--53e5--88ab--5da9d8a68a1e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LXpqTL-G1pt-XewF-Zt4p-vrnA-Ynye-ARUN64', 'scsi-0QEMU_QEMU_HARDDISK_0e74aa03-0dd7-4a4d-94fb-6534b2ad29b5', 'scsi-SQEMU_QEMU_HARDDISK_0e74aa03-0dd7-4a4d-94fb-6534b2ad29b5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-13 00:56:48.974046 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b7299377--1bbd--5436--9d58--2dd820a08a2f-osd--block--b7299377--1bbd--5436--9d58--2dd820a08a2f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xU3y9w-lno0-fYYl-h6C2-Bafl-jXiW-zSsbBh', 'scsi-0QEMU_QEMU_HARDDISK_e5b6d572-8591-43aa-97f9-3b718c2d248a', 'scsi-SQEMU_QEMU_HARDDISK_e5b6d572-8591-43aa-97f9-3b718c2d248a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-13 00:56:48.974053 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b47ce045-806c-4f33-b887-31de2316680c', 'scsi-SQEMU_QEMU_HARDDISK_b47ce045-806c-4f33-b887-31de2316680c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-13 00:56:48.974100 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-13-00-03-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-13 00:56:48.974110 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--49707cb0--36ac--571b--bf56--7288c46886ca-osd--block--49707cb0--36ac--571b--bf56--7288c46886ca', 'dm-uuid-LVM-lUuxAlRxeDpKHFR330Fw0ajQMZxdmGdFcZe0ZY3SvPyxgqFjJLxezDxIRmkhNvve'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-13 00:56:48.974119 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--119e494c--61db--56d2--84c4--ae65d8356f6a-osd--block--119e494c--61db--56d2--84c4--ae65d8356f6a', 'dm-uuid-LVM-aKLT8JNCOXsBc0C1gwIdNTjLoGLtcq6z5t48Wuu2NVQ4Z0cbe51erZOUcnYreOLk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-13 00:56:48.974126 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5854fe4a--6d96--56a2--8017--73d7ac8736b8-osd--block--5854fe4a--6d96--56a2--8017--73d7ac8736b8', 'dm-uuid-LVM-DanwlmyYXjv3W8jDd7gIIXAnF5dZXwutprgamNuSW6Fu1UsLU31ga3JUkWu8KPCy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-13 00:56:48.974147 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--798cee0b--732e--51b2--a8a3--29d8c2932297-osd--block--798cee0b--732e--51b2--a8a3--29d8c2932297', 'dm-uuid-LVM-L1OBBH0k0D00ZH0dN8uE5pJTqoWU0KZEfPq0LLMud7Q5AWoDnaD4QV1JonD11yi2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-13 00:56:48.974194 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-13 00:56:48.974204 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-13 00:56:48.974211 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-13 00:56:48.974217 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.974227 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:56:48.974234 | orchestrator | [repeated per-item "skipping" output condensed: testbed-node-0/1/2 skipped every device because 'inventory_hostname in groups.get(osd_group_name, [])' was false; testbed-node-4/5 skipped every device because 'osd_auto_discovery | default(False) | bool' was false]
2026-03-13 00:56:48.974234 | orchestrator | [devices iterated on each host: loop0-loop7 (virtual, 0.00 Bytes); sda (QEMU HARDDISK, 80.00 GB, partitions sda1 79.00 GB cloudimg-rootfs, sda14 4.00 MB, sda15 106.00 MB UEFI, sda16 913.00 MB BOOT); sdb and sdc (QEMU HARDDISK, 20.00 GB each, LVM PVs backing ceph OSD block volumes dm-0/dm-1); sdd (QEMU HARDDISK, 20.00 GB, unused); sr0 (QEMU DVD-ROM, 506.00 KB, label config-2)]
2026-03-13 00:56:48.974821 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.974870 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.974884 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.974967 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.975026 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []},
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-13 00:56:48.975040 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a0d7d0f-4636-493e-803a-05680bb9c3f6', 'scsi-SQEMU_QEMU_HARDDISK_5a0d7d0f-4636-493e-803a-05680bb9c3f6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a0d7d0f-4636-493e-803a-05680bb9c3f6-part1', 'scsi-SQEMU_QEMU_HARDDISK_5a0d7d0f-4636-493e-803a-05680bb9c3f6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a0d7d0f-4636-493e-803a-05680bb9c3f6-part14', 'scsi-SQEMU_QEMU_HARDDISK_5a0d7d0f-4636-493e-803a-05680bb9c3f6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a0d7d0f-4636-493e-803a-05680bb9c3f6-part15', 'scsi-SQEMU_QEMU_HARDDISK_5a0d7d0f-4636-493e-803a-05680bb9c3f6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_5a0d7d0f-4636-493e-803a-05680bb9c3f6-part16', 'scsi-SQEMU_QEMU_HARDDISK_5a0d7d0f-4636-493e-803a-05680bb9c3f6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-13 00:56:48.975045 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-13-00-03-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-13 00:56:48.975049 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.975053 | orchestrator | 2026-03-13 00:56:48.975088 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-13 00:56:48.975097 | orchestrator | Friday 13 March 2026 00:46:36 +0000 (0:00:02.267) 0:00:35.278 ********** 2026-03-13 00:56:48.975112 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.975116 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.975120 | orchestrator | ok: [testbed-node-5] 2026-03-13 
00:56:48.975124 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.975128 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.975131 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.975135 | orchestrator | 2026-03-13 00:56:48.975142 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-13 00:56:48.975146 | orchestrator | Friday 13 March 2026 00:46:37 +0000 (0:00:01.151) 0:00:36.430 ********** 2026-03-13 00:56:48.975150 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.975154 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.975157 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.975161 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.975164 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.975168 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.975172 | orchestrator | 2026-03-13 00:56:48.975175 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-13 00:56:48.975179 | orchestrator | Friday 13 March 2026 00:46:38 +0000 (0:00:00.841) 0:00:37.271 ********** 2026-03-13 00:56:48.975183 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.975187 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.975190 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.975194 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.975198 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.975201 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.975205 | orchestrator | 2026-03-13 00:56:48.975209 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-13 00:56:48.975213 | orchestrator | Friday 13 March 2026 00:46:39 +0000 (0:00:00.998) 0:00:38.270 ********** 2026-03-13 00:56:48.975216 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.975220 | orchestrator | skipping: [testbed-node-4] 
2026-03-13 00:56:48.975223 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.975227 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.975233 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.975237 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.975241 | orchestrator |
2026-03-13 00:56:48.975245 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-13 00:56:48.975251 | orchestrator | Friday 13 March 2026 00:46:41 +0000 (0:00:01.719) 0:00:39.990 **********
2026-03-13 00:56:48.975264 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.975268 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.975271 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.975275 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.975279 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.975282 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.975286 | orchestrator |
2026-03-13 00:56:48.975290 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-13 00:56:48.975293 | orchestrator | Friday 13 March 2026 00:46:42 +0000 (0:00:01.499) 0:00:41.490 **********
2026-03-13 00:56:48.975297 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.975301 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.975305 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.975308 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.975312 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.975316 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.975319 | orchestrator |
2026-03-13 00:56:48.975323 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-13 00:56:48.975327 | orchestrator | Friday 13 March 2026 00:46:43 +0000 (0:00:00.881) 0:00:42.371 **********
2026-03-13 00:56:48.975330 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-13 00:56:48.975334 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-13 00:56:48.975338 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-13 00:56:48.975342 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-13 00:56:48.975345 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-13 00:56:48.975349 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-13 00:56:48.975353 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-13 00:56:48.975356 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-13 00:56:48.975363 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-13 00:56:48.975367 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-13 00:56:48.975371 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-03-13 00:56:48.975375 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-13 00:56:48.975378 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-03-13 00:56:48.975382 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-13 00:56:48.975385 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-13 00:56:48.975389 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-03-13 00:56:48.975393 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-13 00:56:48.975396 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-03-13 00:56:48.975400 | orchestrator |
2026-03-13 00:56:48.975404 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-13 00:56:48.975408 | orchestrator | Friday 13 March 2026 00:46:47 +0000 (0:00:04.254) 0:00:46.625 **********
2026-03-13 00:56:48.975411 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-13 00:56:48.975415 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-13 00:56:48.975419 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-13 00:56:48.975422 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-13 00:56:48.975426 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-13 00:56:48.975430 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-13 00:56:48.975433 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.975437 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-13 00:56:48.975457 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-13 00:56:48.975462 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-13 00:56:48.975466 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.975469 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-13 00:56:48.975473 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.975477 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-13 00:56:48.975480 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-13 00:56:48.975484 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-13 00:56:48.975488 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-13 00:56:48.975491 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-13 00:56:48.975495 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.975499 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.975503 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-13 00:56:48.975506 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-13 00:56:48.975510 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-13 00:56:48.975514 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.975518 | orchestrator |
2026-03-13 00:56:48.975521 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-13 00:56:48.975525 | orchestrator | Friday 13 March 2026 00:46:49 +0000 (0:00:01.659) 0:00:48.285 **********
2026-03-13 00:56:48.975529 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.975532 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.975536 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.975540 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-13 00:56:48.975544 | orchestrator |
2026-03-13 00:56:48.975550 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-13 00:56:48.975554 | orchestrator | Friday 13 March 2026 00:46:50 +0000 (0:00:01.228) 0:00:49.513 **********
2026-03-13 00:56:48.975562 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.975566 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.975569 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.975573 | orchestrator |
2026-03-13 00:56:48.975577 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-13 00:56:48.975580 | orchestrator | Friday 13 March 2026 00:46:50 +0000 (0:00:00.303) 0:00:49.817 **********
2026-03-13 00:56:48.975584 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.975588 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.975592 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.975595 | orchestrator |
2026-03-13 00:56:48.975599 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-13 00:56:48.975603 | orchestrator | Friday 13 March 2026 00:46:51 +0000 (0:00:00.447) 0:00:50.264 **********
2026-03-13 00:56:48.975606 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.975610 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.975614 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.975617 | orchestrator |
2026-03-13 00:56:48.975621 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-13 00:56:48.975625 | orchestrator | Friday 13 March 2026 00:46:52 +0000 (0:00:00.736) 0:00:51.001 **********
2026-03-13 00:56:48.975628 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:56:48.975664 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:56:48.975668 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:56:48.975671 | orchestrator |
2026-03-13 00:56:48.975675 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-13 00:56:48.975679 | orchestrator | Friday 13 March 2026 00:46:52 +0000 (0:00:00.671) 0:00:51.673 **********
2026-03-13 00:56:48.975682 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-13 00:56:48.975686 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-13 00:56:48.975690 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-13 00:56:48.975693 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.975697 | orchestrator |
2026-03-13 00:56:48.975701 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-13 00:56:48.975705 | orchestrator | Friday 13 March 2026 00:46:53 +0000 (0:00:00.515) 0:00:52.189 **********
2026-03-13 00:56:48.975708 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-13 00:56:48.975712 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-13 00:56:48.975716 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-13 00:56:48.975720 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.975723 | orchestrator |
2026-03-13 00:56:48.975727 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-13 00:56:48.975731 | orchestrator | Friday 13 March 2026 00:46:54 +0000 (0:00:00.851) 0:00:53.040 **********
2026-03-13 00:56:48.975734 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-13 00:56:48.975738 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-13 00:56:48.975742 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-13 00:56:48.975745 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.975749 | orchestrator |
2026-03-13 00:56:48.975753 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-13 00:56:48.975756 | orchestrator | Friday 13 March 2026 00:46:54 +0000 (0:00:00.451) 0:00:53.492 **********
2026-03-13 00:56:48.975760 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:56:48.975764 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:56:48.975768 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:56:48.975771 | orchestrator |
2026-03-13 00:56:48.975775 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-13 00:56:48.975779 | orchestrator | Friday 13 March 2026 00:46:54 +0000 (0:00:00.395) 0:00:53.888 **********
2026-03-13 00:56:48.975782 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-13 00:56:48.975790 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-13 00:56:48.975808 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-13 00:56:48.975812 | orchestrator |
2026-03-13 00:56:48.975816 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-13 00:56:48.975820 | orchestrator | Friday 13 March 2026 00:46:56 +0000 (0:00:01.298) 0:00:55.186 **********
2026-03-13 00:56:48.975824 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-13 00:56:48.975827 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-13 00:56:48.975831 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-13 00:56:48.975835 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-13 00:56:48.975838 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-13 00:56:48.975842 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-13 00:56:48.975846 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-13 00:56:48.975849 | orchestrator |
2026-03-13 00:56:48.975853 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-13 00:56:48.975857 | orchestrator | Friday 13 March 2026 00:46:56 +0000 (0:00:00.692) 0:00:55.878 **********
2026-03-13 00:56:48.975861 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-13 00:56:48.975864 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-13 00:56:48.975868 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-13 00:56:48.975872 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-13 00:56:48.975876 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-13 00:56:48.975881 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-13 00:56:48.975885 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-13 00:56:48.975889 | orchestrator |
2026-03-13 00:56:48.975892 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-13 00:56:48.975896 | orchestrator | Friday 13 March 2026 00:46:58 +0000 (0:00:01.592) 0:00:57.471 **********
2026-03-13 00:56:48.975900 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 00:56:48.975905 | orchestrator |
2026-03-13 00:56:48.975909 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-13 00:56:48.975912 | orchestrator | Friday 13 March 2026 00:46:59 +0000 (0:00:00.897) 0:00:58.369 **********
2026-03-13 00:56:48.975916 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 00:56:48.975920 | orchestrator |
2026-03-13 00:56:48.975924 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-13 00:56:48.975927 | orchestrator | Friday 13 March 2026 00:47:00 +0000 (0:00:00.820) 0:00:59.189 **********
2026-03-13 00:56:48.975931 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.975936 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.975942 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.975947 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:56:48.975953 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:56:48.975959 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:56:48.975965 | orchestrator |
2026-03-13 00:56:48.975971 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-13 00:56:48.975976 | orchestrator | Friday 13 March 2026 00:47:01 +0000 (0:00:01.098) 0:01:00.288 **********
2026-03-13 00:56:48.975983 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.975993 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:56:48.976000 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.976004 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:56:48.976008 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.976012 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:56:48.976016 | orchestrator |
2026-03-13 00:56:48.976019 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-13 00:56:48.976023 | orchestrator | Friday 13 March 2026 00:47:02 +0000 (0:00:00.881) 0:01:01.169 **********
2026-03-13 00:56:48.976027 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:56:48.976030 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:56:48.976034 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:56:48.976038 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.976041 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.976045 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.976049 | orchestrator |
2026-03-13 00:56:48.976052 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-13 00:56:48.976056 | orchestrator | Friday 13 March 2026 00:47:03 +0000 (0:00:01.075) 0:01:02.244 **********
2026-03-13 00:56:48.976060 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:56:48.976063 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.976067 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.976071 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:56:48.976074 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.976078 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:56:48.976082 | orchestrator |
2026-03-13 00:56:48.976086 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-13 00:56:48.976089 | orchestrator | Friday 13 March 2026 00:47:04 +0000 (0:00:00.819) 0:01:03.064 **********
2026-03-13 00:56:48.976093 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.976097 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.976100 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.976104 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:56:48.976108 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:56:48.976126 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:56:48.976131 | orchestrator |
2026-03-13 00:56:48.976134 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-13 00:56:48.976138 | orchestrator | Friday 13 March 2026 00:47:05 +0000 (0:00:01.440) 0:01:04.504 **********
2026-03-13 00:56:48.976142 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.976146 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.976149 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.976153 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.976157 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.976160 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.976164 | orchestrator |
2026-03-13 00:56:48.976168 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-13 00:56:48.976171 | orchestrator | Friday 13 March 2026 00:47:06 +0000 (0:00:00.733) 0:01:05.238 **********
2026-03-13 00:56:48.976175 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.976179 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.976182 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.976186 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.976190 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.976193 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.976197 | orchestrator |
2026-03-13 00:56:48.976201 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-13 00:56:48.976204 | orchestrator | Friday 13 March 2026 00:47:07 +0000 (0:00:01.199) 0:01:06.437 **********
2026-03-13 00:56:48.976208 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:56:48.976212 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:56:48.976216 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:56:48.976219 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:56:48.976223 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:56:48.976229 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:56:48.976233 | orchestrator |
2026-03-13 00:56:48.976237 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-13 00:56:48.976241 | orchestrator | Friday 13 March 2026 00:47:09 +0000 (0:00:02.339) 0:01:08.776 **********
2026-03-13 00:56:48.976255 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:56:48.976258 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:56:48.976265 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:56:48.976268 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:56:48.976272 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:56:48.976276 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:56:48.976279 | orchestrator |
2026-03-13 00:56:48.976283 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-13 00:56:48.976287 | orchestrator | Friday 13 March 2026 00:47:12 +0000 (0:00:02.360) 0:01:11.136 **********
2026-03-13 00:56:48.976291 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.976294 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.976298 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.976302 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.976306 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.976309 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.976313 | orchestrator |
2026-03-13 00:56:48.976317 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-13 00:56:48.976320 | orchestrator | Friday 13 March 2026 00:47:12 +0000 (0:00:00.725) 0:01:11.862 **********
2026-03-13 00:56:48.976324 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.976328 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.976331 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.976335 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:56:48.976339 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:56:48.976342 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:56:48.976346 | orchestrator |
2026-03-13 00:56:48.976350 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-13 00:56:48.976354 | orchestrator | Friday 13 March 2026 00:47:14 +0000 (0:00:01.144) 0:01:13.006 **********
2026-03-13 00:56:48.976357 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:56:48.976361 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:56:48.976365 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:56:48.976368 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.976372 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.976376 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.976379 | orchestrator |
2026-03-13 00:56:48.976383 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-13 00:56:48.976387 | orchestrator | Friday 13 March 2026 00:47:14 +0000 (0:00:00.757) 0:01:13.763 **********
2026-03-13 00:56:48.976391 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:56:48.976394 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:56:48.976398 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:56:48.976402 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.976405 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.976409 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.976413 | orchestrator |
2026-03-13 00:56:48.976417 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-13 00:56:48.976423 | orchestrator | Friday 13 March 2026 00:47:15 +0000 (0:00:01.052) 0:01:14.816 **********
2026-03-13 00:56:48.976429 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:56:48.976435 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:56:48.976441 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:56:48.976447 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.976453 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.976458 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.976465 | orchestrator |
2026-03-13 00:56:48.976471 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-13 00:56:48.976478 | orchestrator | Friday 13 March 2026 00:47:16 +0000 (0:00:00.560) 0:01:15.376 **********
2026-03-13 00:56:48.976488 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.976494 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.976500 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.976506 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.976512 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.976518 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.976524 | orchestrator |
2026-03-13 00:56:48.976530 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-13 00:56:48.976536 | orchestrator | Friday 13 March 2026 00:47:17 +0000 (0:00:00.713) 0:01:16.090 **********
2026-03-13 00:56:48.976541 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.976545 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.976549 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.976553 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.976578 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.976586 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.976592 | orchestrator |
2026-03-13 00:56:48.976599 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-13 00:56:48.976605 | orchestrator | Friday 13 March 2026 00:47:17 +0000 (0:00:00.678) 0:01:16.768 **********
2026-03-13 00:56:48.976612 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.976618 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.976624 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.976628 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:56:48.976697 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:56:48.976703 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:56:48.976707 | orchestrator |
2026-03-13 00:56:48.976711 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-13 00:56:48.976715 | orchestrator | Friday 13 March 2026 00:47:18 +0000 (0:00:00.727) 0:01:17.496 **********
2026-03-13 00:56:48.976719 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:56:48.976723 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:56:48.976726 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:56:48.976730 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:56:48.976734 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:56:48.976737 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:56:48.976741 | orchestrator |
2026-03-13 00:56:48.976745 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-13 00:56:48.976749 | orchestrator | Friday 13 March 2026 00:47:19 +0000 (0:00:00.643) 0:01:18.140 **********
2026-03-13 00:56:48.976752 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:56:48.976756 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:56:48.976760 |
orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.976763 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.976767 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.976771 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.976774 | orchestrator | 2026-03-13 00:56:48.976778 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-13 00:56:48.976782 | orchestrator | Friday 13 March 2026 00:47:20 +0000 (0:00:01.238) 0:01:19.379 ********** 2026-03-13 00:56:48.976785 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:56:48.976795 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:56:48.976799 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:56:48.976802 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:56:48.976806 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:56:48.976810 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:56:48.976814 | orchestrator | 2026-03-13 00:56:48.976817 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-13 00:56:48.976821 | orchestrator | Friday 13 March 2026 00:47:22 +0000 (0:00:02.324) 0:01:21.704 ********** 2026-03-13 00:56:48.976825 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:56:48.976828 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:56:48.976832 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:56:48.976836 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:56:48.976844 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:56:48.976847 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:56:48.976851 | orchestrator | 2026-03-13 00:56:48.976855 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-13 00:56:48.976859 | orchestrator | Friday 13 March 2026 00:47:25 +0000 (0:00:02.350) 0:01:24.055 ********** 2026-03-13 00:56:48.976863 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:56:48.976867 | orchestrator | 2026-03-13 00:56:48.976871 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-13 00:56:48.976875 | orchestrator | Friday 13 March 2026 00:47:26 +0000 (0:00:01.323) 0:01:25.379 ********** 2026-03-13 00:56:48.976878 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.976882 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.976886 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.976890 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.976893 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.976897 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.976900 | orchestrator | 2026-03-13 00:56:48.976904 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-13 00:56:48.976908 | orchestrator | Friday 13 March 2026 00:47:27 +0000 (0:00:00.603) 0:01:25.982 ********** 2026-03-13 00:56:48.976912 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.976915 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.976919 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.976923 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.976927 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.976930 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.976934 | orchestrator | 2026-03-13 00:56:48.976937 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-13 00:56:48.976941 | orchestrator | Friday 13 March 2026 00:47:27 +0000 (0:00:00.778) 0:01:26.760 ********** 2026-03-13 00:56:48.976945 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-13 
00:56:48.976949 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-13 00:56:48.976952 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-13 00:56:48.976956 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-13 00:56:48.976960 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-13 00:56:48.976963 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-13 00:56:48.976967 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-13 00:56:48.976971 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-13 00:56:48.976975 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-13 00:56:48.976978 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-13 00:56:48.977010 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-13 00:56:48.977015 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-13 00:56:48.977018 | orchestrator | 2026-03-13 00:56:48.977022 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-13 00:56:48.977026 | orchestrator | Friday 13 March 2026 00:47:29 +0000 (0:00:01.278) 0:01:28.039 ********** 2026-03-13 00:56:48.977029 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:56:48.977033 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:56:48.977037 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:56:48.977040 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:56:48.977047 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:56:48.977051 | 
orchestrator | changed: [testbed-node-2] 2026-03-13 00:56:48.977054 | orchestrator | 2026-03-13 00:56:48.977058 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-13 00:56:48.977062 | orchestrator | Friday 13 March 2026 00:47:30 +0000 (0:00:01.027) 0:01:29.066 ********** 2026-03-13 00:56:48.977065 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.977069 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.977073 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.977076 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.977080 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.977084 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.977087 | orchestrator | 2026-03-13 00:56:48.977091 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-13 00:56:48.977095 | orchestrator | Friday 13 March 2026 00:47:30 +0000 (0:00:00.586) 0:01:29.653 ********** 2026-03-13 00:56:48.977098 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.977102 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.977106 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.977109 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.977113 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.977116 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.977120 | orchestrator | 2026-03-13 00:56:48.977126 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-13 00:56:48.977130 | orchestrator | Friday 13 March 2026 00:47:31 +0000 (0:00:00.772) 0:01:30.426 ********** 2026-03-13 00:56:48.977133 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.977137 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.977141 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.977144 | 
orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.977148 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.977152 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.977155 | orchestrator | 2026-03-13 00:56:48.977159 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-13 00:56:48.977163 | orchestrator | Friday 13 March 2026 00:47:32 +0000 (0:00:00.585) 0:01:31.011 ********** 2026-03-13 00:56:48.977166 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:56:48.977170 | orchestrator | 2026-03-13 00:56:48.977174 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-13 00:56:48.977178 | orchestrator | Friday 13 March 2026 00:47:33 +0000 (0:00:01.039) 0:01:32.051 ********** 2026-03-13 00:56:48.977181 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.977185 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.977189 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.977192 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.977196 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.977200 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.977203 | orchestrator | 2026-03-13 00:56:48.977207 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-13 00:56:48.977211 | orchestrator | Friday 13 March 2026 00:48:20 +0000 (0:00:47.701) 0:02:19.752 ********** 2026-03-13 00:56:48.977214 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-13 00:56:48.977218 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-13 00:56:48.977222 | orchestrator | skipping: [testbed-node-4] => 
(item=docker.io/grafana/grafana:6.7.4)  2026-03-13 00:56:48.977225 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.977229 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-13 00:56:48.977233 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-13 00:56:48.977236 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-13 00:56:48.977243 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-13 00:56:48.977246 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-13 00:56:48.977250 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-13 00:56:48.977254 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.977257 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-13 00:56:48.977261 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-13 00:56:48.977265 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-13 00:56:48.977268 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.977272 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-13 00:56:48.977276 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-13 00:56:48.977279 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-13 00:56:48.977283 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.977287 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.977302 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-13 00:56:48.977306 | orchestrator | skipping: [testbed-node-2] => 
(item=docker.io/prom/prometheus:v2.7.2)  2026-03-13 00:56:48.977310 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-13 00:56:48.977314 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.977318 | orchestrator | 2026-03-13 00:56:48.977321 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-13 00:56:48.977325 | orchestrator | Friday 13 March 2026 00:48:21 +0000 (0:00:00.776) 0:02:20.529 ********** 2026-03-13 00:56:48.977329 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.977332 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.977336 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.977340 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.977343 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.977347 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.977351 | orchestrator | 2026-03-13 00:56:48.977354 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-13 00:56:48.977358 | orchestrator | Friday 13 March 2026 00:48:22 +0000 (0:00:00.705) 0:02:21.234 ********** 2026-03-13 00:56:48.977362 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.977365 | orchestrator | 2026-03-13 00:56:48.977369 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-13 00:56:48.977373 | orchestrator | Friday 13 March 2026 00:48:22 +0000 (0:00:00.126) 0:02:21.361 ********** 2026-03-13 00:56:48.977376 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.977380 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.977384 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.977388 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.977391 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.977395 | orchestrator | skipping: 
[testbed-node-2] 2026-03-13 00:56:48.977399 | orchestrator | 2026-03-13 00:56:48.977402 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-13 00:56:48.977406 | orchestrator | Friday 13 March 2026 00:48:22 +0000 (0:00:00.513) 0:02:21.875 ********** 2026-03-13 00:56:48.977412 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.977416 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.977420 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.977423 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.977427 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.977431 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.977434 | orchestrator | 2026-03-13 00:56:48.977438 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-13 00:56:48.977445 | orchestrator | Friday 13 March 2026 00:48:23 +0000 (0:00:00.590) 0:02:22.466 ********** 2026-03-13 00:56:48.977448 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.977452 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.977456 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.977459 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.977463 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.977467 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.977470 | orchestrator | 2026-03-13 00:56:48.977474 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-13 00:56:48.977478 | orchestrator | Friday 13 March 2026 00:48:24 +0000 (0:00:00.549) 0:02:23.015 ********** 2026-03-13 00:56:48.977482 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.977485 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.977489 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.977493 | orchestrator | ok: [testbed-node-5] 2026-03-13 
00:56:48.977497 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.977500 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.977504 | orchestrator | 2026-03-13 00:56:48.977508 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-13 00:56:48.977512 | orchestrator | Friday 13 March 2026 00:48:26 +0000 (0:00:02.795) 0:02:25.811 ********** 2026-03-13 00:56:48.977515 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.977519 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.977523 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.977526 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.977530 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.977534 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.977537 | orchestrator | 2026-03-13 00:56:48.977541 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-13 00:56:48.977545 | orchestrator | Friday 13 March 2026 00:48:27 +0000 (0:00:00.587) 0:02:26.398 ********** 2026-03-13 00:56:48.977549 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:56:48.977553 | orchestrator | 2026-03-13 00:56:48.977557 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-13 00:56:48.977561 | orchestrator | Friday 13 March 2026 00:48:28 +0000 (0:00:01.088) 0:02:27.487 ********** 2026-03-13 00:56:48.977564 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.977568 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.977572 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.977575 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.977579 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.977583 | orchestrator | skipping: 
[testbed-node-2] 2026-03-13 00:56:48.977587 | orchestrator | 2026-03-13 00:56:48.977590 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-13 00:56:48.977594 | orchestrator | Friday 13 March 2026 00:48:29 +0000 (0:00:00.914) 0:02:28.402 ********** 2026-03-13 00:56:48.977598 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.977602 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.977605 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.977609 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.977613 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.977616 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.977620 | orchestrator | 2026-03-13 00:56:48.977624 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-13 00:56:48.977628 | orchestrator | Friday 13 March 2026 00:48:30 +0000 (0:00:00.780) 0:02:29.182 ********** 2026-03-13 00:56:48.977645 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.977651 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.977672 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.977679 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.977685 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.977694 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.977700 | orchestrator | 2026-03-13 00:56:48.977706 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-13 00:56:48.977711 | orchestrator | Friday 13 March 2026 00:48:31 +0000 (0:00:00.851) 0:02:30.034 ********** 2026-03-13 00:56:48.977715 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.977719 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.977723 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.977726 | orchestrator | skipping: 
[testbed-node-0] 2026-03-13 00:56:48.977730 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.977734 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.977737 | orchestrator | 2026-03-13 00:56:48.977741 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-13 00:56:48.977745 | orchestrator | Friday 13 March 2026 00:48:31 +0000 (0:00:00.641) 0:02:30.675 ********** 2026-03-13 00:56:48.977748 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.977752 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.977756 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.977759 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.977763 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.977767 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.977770 | orchestrator | 2026-03-13 00:56:48.977774 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-13 00:56:48.977778 | orchestrator | Friday 13 March 2026 00:48:32 +0000 (0:00:00.747) 0:02:31.423 ********** 2026-03-13 00:56:48.977781 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.977785 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.977789 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.977793 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.977797 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.977800 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.977804 | orchestrator | 2026-03-13 00:56:48.977808 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-13 00:56:48.977814 | orchestrator | Friday 13 March 2026 00:48:32 +0000 (0:00:00.428) 0:02:31.851 ********** 2026-03-13 00:56:48.977818 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.977822 | orchestrator | skipping: 
[testbed-node-4] 2026-03-13 00:56:48.977825 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.977829 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.977833 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.977836 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.977840 | orchestrator | 2026-03-13 00:56:48.977844 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-13 00:56:48.977847 | orchestrator | Friday 13 March 2026 00:48:33 +0000 (0:00:00.918) 0:02:32.770 ********** 2026-03-13 00:56:48.977851 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.977855 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.977858 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.977862 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.977865 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.977869 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.977873 | orchestrator | 2026-03-13 00:56:48.977876 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-13 00:56:48.977880 | orchestrator | Friday 13 March 2026 00:48:34 +0000 (0:00:00.614) 0:02:33.384 ********** 2026-03-13 00:56:48.977884 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.977888 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.977891 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.977895 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.977899 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.977903 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.977906 | orchestrator | 2026-03-13 00:56:48.977910 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-13 00:56:48.977916 | orchestrator | Friday 13 March 2026 00:48:35 +0000 (0:00:01.281) 0:02:34.666 ********** 2026-03-13 
00:56:48.977920 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:56:48.977924 | orchestrator | 2026-03-13 00:56:48.977928 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-13 00:56:48.977931 | orchestrator | Friday 13 March 2026 00:48:36 +0000 (0:00:01.194) 0:02:35.861 ********** 2026-03-13 00:56:48.977935 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-03-13 00:56:48.977939 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-03-13 00:56:48.977942 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-03-13 00:56:48.977946 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-03-13 00:56:48.977950 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-03-13 00:56:48.977954 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-03-13 00:56:48.977957 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-03-13 00:56:48.977961 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-03-13 00:56:48.977965 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-03-13 00:56:48.977968 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-03-13 00:56:48.977972 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-03-13 00:56:48.977976 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-03-13 00:56:48.977979 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-03-13 00:56:48.977983 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-03-13 00:56:48.977987 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-03-13 00:56:48.977990 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 
2026-03-13 00:56:48.977994 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-03-13 00:56:48.977998 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-03-13 00:56:48.978034 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-03-13 00:56:48.978040 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-03-13 00:56:48.978044 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-03-13 00:56:48.978047 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-03-13 00:56:48.978051 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-03-13 00:56:48.978055 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-03-13 00:56:48.978059 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-03-13 00:56:48.978062 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-03-13 00:56:48.978066 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-03-13 00:56:48.978070 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-03-13 00:56:48.978073 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-03-13 00:56:48.978077 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-03-13 00:56:48.978081 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-03-13 00:56:48.978084 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-03-13 00:56:48.978088 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-03-13 00:56:48.978092 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-03-13 00:56:48.978096 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-03-13 00:56:48.978099 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-03-13 00:56:48.978103 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-03-13 00:56:48.978107 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-03-13 00:56:48.978113 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-03-13 00:56:48.978127 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-03-13 00:56:48.978130 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-03-13 00:56:48.978134 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-03-13 00:56:48.978138 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-03-13 00:56:48.978142 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-03-13 00:56:48.978145 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-03-13 00:56:48.978149 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-13 00:56:48.978153 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-13 00:56:48.978157 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-03-13 00:56:48.978160 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-13 00:56:48.978164 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-13 00:56:48.978168 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-03-13 00:56:48.978171 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-13 00:56:48.978175 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-13 00:56:48.978179 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-03-13 00:56:48.978182 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-13 00:56:48.978186 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-13 00:56:48.978190 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-13 00:56:48.978194 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-13 00:56:48.978197 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-13 00:56:48.978201 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-13 00:56:48.978205 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-13 00:56:48.978208 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-13 00:56:48.978212 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-13 00:56:48.978216 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-13 00:56:48.978219 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-13 00:56:48.978223 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-13 00:56:48.978227 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-13 00:56:48.978230 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-13 00:56:48.978234 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-13 00:56:48.978238 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-13 00:56:48.978242 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-13 00:56:48.978245 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-13 00:56:48.978249 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-13 00:56:48.978253 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-13 00:56:48.978257 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-13 00:56:48.978260 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-13 00:56:48.978277 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-13 00:56:48.978281 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-13 00:56:48.978291 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-13 00:56:48.978295 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-13 00:56:48.978298 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-13 00:56:48.978302 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-03-13 00:56:48.978306 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-03-13 00:56:48.978309 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-03-13 00:56:48.978313 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-03-13 00:56:48.978317 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-13 00:56:48.978321 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-13 00:56:48.978324 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-03-13 00:56:48.978328 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-03-13 00:56:48.978332 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-03-13 00:56:48.978336 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-03-13 00:56:48.978340 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-03-13 00:56:48.978343 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-13 00:56:48.978347 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-03-13 00:56:48.978351 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-03-13 00:56:48.978354 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-03-13 00:56:48.978358 | orchestrator |
2026-03-13 00:56:48.978364 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-13 00:56:48.978368 | orchestrator | Friday 13 March 2026 00:48:44 +0000 (0:00:07.946) 0:02:43.807 **********
2026-03-13 00:56:48.978372 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.978375 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.978379 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.978383 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4, testbed-node-3, testbed-node-5
2026-03-13 00:56:48.978387 | orchestrator |
2026-03-13 00:56:48.978391 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-13 00:56:48.978394 | orchestrator | Friday 13 March 2026 00:48:45 +0000 (0:00:00.944) 0:02:44.752 **********
2026-03-13 00:56:48.978398 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-13 00:56:48.978403 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-13 00:56:48.978407 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-13 00:56:48.978410 | orchestrator |
2026-03-13 00:56:48.978414 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-13 00:56:48.978418 | orchestrator | Friday 13 March 2026 00:48:46 +0000 (0:00:01.027) 0:02:45.779 **********
2026-03-13 00:56:48.978421 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-13 00:56:48.978427 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-13 00:56:48.978434 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-13 00:56:48.978443 | orchestrator |
2026-03-13 00:56:48.978452 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-13 00:56:48.978458 | orchestrator | Friday 13 March 2026 00:48:48 +0000 (0:00:01.648) 0:02:47.428 **********
2026-03-13 00:56:48.978468 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:56:48.978475 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:56:48.978481 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:56:48.978487 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.978494 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.978501 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.978507 | orchestrator |
2026-03-13 00:56:48.978513 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-13 00:56:48.978519 | orchestrator | Friday 13 March 2026 00:48:49 +0000 (0:00:00.744) 0:02:48.172 **********
2026-03-13 00:56:48.978526 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:56:48.978532 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:56:48.978538 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:56:48.978544 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.978551 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.978557 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.978563 | orchestrator |
2026-03-13 00:56:48.978569 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-13 00:56:48.978575 | orchestrator | Friday 13 March 2026 00:48:50 +0000 (0:00:01.007) 0:02:49.250 **********
2026-03-13 00:56:48.978581 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.978588 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.978594 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.978600 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.978606 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.978613 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.978619 | orchestrator |
2026-03-13 00:56:48.978774 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-13 00:56:48.978789 | orchestrator | Friday 13 March 2026 00:48:51 +0000 (0:00:01.145) 0:02:50.257 **********
2026-03-13 00:56:48.978793 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.978797 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.978801 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.978805 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.978808 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.978812 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.978816 | orchestrator |
2026-03-13 00:56:48.978819 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-13 00:56:48.978823 | orchestrator | Friday 13 March 2026 00:48:52 +0000 (0:00:01.145) 0:02:51.402 **********
2026-03-13 00:56:48.978827 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.978830 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.978834 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.978838 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.978841 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.978845 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.978849 | orchestrator |
2026-03-13 00:56:48.978852 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-13 00:56:48.978856 | orchestrator | Friday 13 March 2026 00:48:53 +0000 (0:00:00.797) 0:02:52.200 **********
2026-03-13 00:56:48.978860 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.978863 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.978867 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.978871 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.978874 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.978878 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.978882 | orchestrator |
2026-03-13 00:56:48.978885 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-13 00:56:48.978889 | orchestrator | Friday 13 March 2026 00:48:54 +0000 (0:00:01.056) 0:02:53.256 **********
2026-03-13 00:56:48.978893 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.978903 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.978911 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.978915 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.978919 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.978922 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.978926 | orchestrator |
2026-03-13 00:56:48.978930 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-13 00:56:48.978933 | orchestrator | Friday 13 March 2026 00:48:55 +0000 (0:00:00.744) 0:02:54.001 **********
2026-03-13 00:56:48.978937 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.978941 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.978944 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.978957 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.978963 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.978978 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.978988 | orchestrator |
2026-03-13 00:56:48.978993 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-13 00:56:48.978998 | orchestrator | Friday 13 March 2026 00:48:56 +0000 (0:00:01.572) 0:02:55.573 **********
2026-03-13 00:56:48.979004 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.979010 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.979015 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.979021 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:56:48.979027 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:56:48.979032 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:56:48.979038 | orchestrator |
2026-03-13 00:56:48.979043 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-13 00:56:48.979048 | orchestrator | Friday 13 March 2026 00:48:59 +0000 (0:00:02.940) 0:02:58.513 **********
2026-03-13 00:56:48.979053 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:56:48.979059 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:56:48.979064 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:56:48.979069 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.979074 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.979080 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.979086 | orchestrator |
2026-03-13 00:56:48.979092 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-13 00:56:48.979098 | orchestrator | Friday 13 March 2026 00:49:00 +0000 (0:00:00.875) 0:02:59.389 **********
2026-03-13 00:56:48.979104 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:56:48.979110 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:56:48.979115 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:56:48.979120 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.979126 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.979132 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.979138 | orchestrator |
2026-03-13 00:56:48.979143 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-13 00:56:48.979159 | orchestrator | Friday 13 March 2026 00:49:01 +0000 (0:00:00.616) 0:03:00.005 **********
2026-03-13 00:56:48.979166 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.979172 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.979177 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.979183 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.979189 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.979195 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.979202 | orchestrator |
2026-03-13 00:56:48.979206 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-13 00:56:48.979210 | orchestrator | Friday 13 March 2026 00:49:01 +0000 (0:00:00.842) 0:03:00.848 **********
2026-03-13 00:56:48.979214 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-13 00:56:48.979218 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-13 00:56:48.979226 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-13 00:56:48.979230 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.979253 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.979258 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.979262 | orchestrator |
2026-03-13 00:56:48.979265 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-13 00:56:48.979269 | orchestrator | Friday 13 March 2026 00:49:02 +0000 (0:00:00.727) 0:03:01.576 **********
2026-03-13 00:56:48.979274 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-03-13 00:56:48.979279 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-03-13 00:56:48.979284 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.979288 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-03-13 00:56:48.979295 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-03-13 00:56:48.979299 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.979302 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-03-13 00:56:48.979306 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-03-13 00:56:48.979310 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.979314 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.979317 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.979321 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.979325 | orchestrator |
2026-03-13 00:56:48.979328 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-13 00:56:48.979332 | orchestrator | Friday 13 March 2026 00:49:03 +0000 (0:00:00.754) 0:03:02.331 **********
2026-03-13 00:56:48.979336 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.979339 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.979343 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.979347 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.979350 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.979354 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.979358 | orchestrator |
2026-03-13 00:56:48.979361 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-13 00:56:48.979365 | orchestrator | Friday 13 March 2026 00:49:03 +0000 (0:00:00.555) 0:03:02.886 **********
2026-03-13 00:56:48.979369 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.979372 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.979379 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.979383 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.979386 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.979390 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.979393 | orchestrator |
2026-03-13 00:56:48.979397 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-13 00:56:48.979401 | orchestrator | Friday 13 March 2026 00:49:04 +0000 (0:00:00.774) 0:03:03.660 **********
2026-03-13 00:56:48.979405 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.979408 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.979412 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.979415 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.979419 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.979423 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.979426 | orchestrator |
2026-03-13 00:56:48.979461 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-13 00:56:48.979465 | orchestrator | Friday 13 March 2026 00:49:05 +0000 (0:00:00.714) 0:03:04.375 **********
2026-03-13 00:56:48.979468 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.979472 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.979476 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.979479 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.979483 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.979487 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.979490 | orchestrator |
2026-03-13 00:56:48.979495 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-13 00:56:48.979512 | orchestrator | Friday 13 March 2026 00:49:06 +0000 (0:00:00.932) 0:03:05.307 **********
2026-03-13 00:56:48.979517 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.979520 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.979541 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.979545 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.979549 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.979552 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.979556 | orchestrator |
2026-03-13 00:56:48.979560 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-13 00:56:48.979564 | orchestrator | Friday 13 March 2026 00:49:07 +0000 (0:00:00.690) 0:03:05.997 **********
2026-03-13 00:56:48.979567 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:56:48.979571 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:56:48.979575 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:56:48.979579 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.979585 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.979591 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.979595 | orchestrator |
2026-03-13 00:56:48.979599 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-13 00:56:48.979603 | orchestrator | Friday 13 March 2026 00:49:08 +0000 (0:00:01.088) 0:03:07.086 **********
2026-03-13 00:56:48.979607 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-13 00:56:48.979611 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-13 00:56:48.979614 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-13 00:56:48.979618 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.979622 | orchestrator |
2026-03-13 00:56:48.979626 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-13 00:56:48.979641 | orchestrator | Friday 13 March 2026 00:49:08 +0000 (0:00:00.552) 0:03:07.639 **********
2026-03-13 00:56:48.979648 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-13 00:56:48.979654 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-13 00:56:48.979664 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-13 00:56:48.979670 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.979680 | orchestrator |
2026-03-13 00:56:48.979684 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-13 00:56:48.979688 | orchestrator | Friday 13 March 2026 00:49:09 +0000 (0:00:00.363) 0:03:08.002 **********
2026-03-13 00:56:48.979691 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-13 00:56:48.979697 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-13 00:56:48.979703 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-13 00:56:48.979708 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.979714 | orchestrator |
2026-03-13 00:56:48.979719 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-13 00:56:48.979724 | orchestrator | Friday 13 March 2026 00:49:09 +0000 (0:00:00.333) 0:03:08.336 **********
2026-03-13 00:56:48.979730 | orchestrator | ok: [testbed-node-3]
2026-03-13 00:56:48.979735 | orchestrator | ok: [testbed-node-4]
2026-03-13 00:56:48.979741 | orchestrator | ok: [testbed-node-5]
2026-03-13 00:56:48.979747 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.979752 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.979757 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.979763 | orchestrator |
2026-03-13 00:56:48.979768 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-13 00:56:48.979775 | orchestrator | Friday 13 March 2026 00:49:09 +0000 (0:00:00.519) 0:03:08.855 **********
2026-03-13 00:56:48.979781 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-03-13 00:56:48.979787 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-13 00:56:48.979794 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-13 00:56:48.979800 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-13 00:56:48.979805 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.979809 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-03-13 00:56:48.979813 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.979816 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-03-13 00:56:48.979820 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.979824 | orchestrator |
2026-03-13 00:56:48.979827 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-13 00:56:48.979831 | orchestrator | Friday 13 March 2026 00:49:12 +0000 (0:00:02.599) 0:03:11.455 **********
2026-03-13 00:56:48.979835 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:56:48.979838 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:56:48.979842 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:56:48.979846 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:56:48.979849 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:56:48.979853 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:56:48.979856 | orchestrator |
2026-03-13 00:56:48.979860 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-13 00:56:48.979864 | orchestrator | Friday 13 March 2026 00:49:16 +0000 (0:00:03.676) 0:03:15.131 **********
2026-03-13 00:56:48.979868 | orchestrator | changed: [testbed-node-4]
2026-03-13 00:56:48.979871 | orchestrator | changed: [testbed-node-3]
2026-03-13 00:56:48.979875 | orchestrator | changed: [testbed-node-5]
2026-03-13 00:56:48.979878 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:56:48.979882 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:56:48.979886 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:56:48.979889 | orchestrator |
2026-03-13 00:56:48.979893 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-13 00:56:48.979897 | orchestrator | Friday 13 March 2026 00:49:17 +0000 (0:00:01.184) 0:03:16.316 **********
2026-03-13 00:56:48.979900 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.979904 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.979909 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.979916 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 00:56:48.979925 | orchestrator |
2026-03-13 00:56:48.979934 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-13 00:56:48.979965 | orchestrator | Friday 13 March 2026 00:49:18 +0000 (0:00:01.131) 0:03:17.448 **********
2026-03-13 00:56:48.979972 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:56:48.979978 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:56:48.979984 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:56:48.979990 | orchestrator |
2026-03-13 00:56:48.979996 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-13 00:56:48.980003 | orchestrator | Friday 13 March 2026 00:49:18 +0000 (0:00:00.346) 0:03:17.794 **********
2026-03-13 00:56:48.980009 | orchestrator | changed: [testbed-node-0]
2026-03-13 00:56:48.980015 | orchestrator | changed: [testbed-node-1]
2026-03-13 00:56:48.980020 | orchestrator | changed: [testbed-node-2]
2026-03-13 00:56:48.980024 | orchestrator |
2026-03-13 00:56:48.980028 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-13 00:56:48.980032 | orchestrator | Friday 13 March 2026 00:49:20 +0000 (0:00:01.493) 0:03:19.288 **********
2026-03-13 00:56:48.980035 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-13 00:56:48.980039 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-13 00:56:48.980043 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-13 00:56:48.980046 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.980052 | orchestrator |
2026-03-13 00:56:48.980058 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-13 00:56:48.980065 | orchestrator | Friday 13 March 2026 00:49:20 +0000 (0:00:00.576) 0:03:19.865 **********
2026-03-13 00:56:48.980074 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:56:48.980082 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:56:48.980087 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:56:48.980092 | orchestrator |
2026-03-13 00:56:48.980098 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-03-13 00:56:48.980104 | orchestrator | Friday 13 March 2026 00:49:21 +0000 (0:00:00.284) 0:03:20.149 **********
2026-03-13 00:56:48.980109 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:56:48.980115 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:56:48.980120 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:56:48.980130 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-13 00:56:48.980136 | orchestrator |
2026-03-13 00:56:48.980142 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-03-13 00:56:48.980147 | orchestrator | Friday 13 March 2026 00:49:22 +0000 (0:00:01.084) 0:03:21.234 **********
2026-03-13 00:56:48.980153 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-13 00:56:48.980159 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-13 00:56:48.980165 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-13 00:56:48.980171 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.980178 | orchestrator |
2026-03-13 00:56:48.980184 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-13 00:56:48.980190 | orchestrator | Friday 13 March 2026 00:49:22 +0000 (0:00:00.365) 0:03:21.599 **********
2026-03-13 00:56:48.980196 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.980200 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.980204 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.980209 | orchestrator |
2026-03-13 00:56:48.980215 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-13 00:56:48.980222 | orchestrator | Friday 13 March 2026 00:49:22 +0000 (0:00:00.302) 0:03:21.901 **********
2026-03-13 00:56:48.980230 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.980236 | orchestrator |
2026-03-13 00:56:48.980242 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-13 00:56:48.980249 | orchestrator | Friday 13 March 2026 00:49:23 +0000 (0:00:00.194) 0:03:22.096 **********
2026-03-13 00:56:48.980260 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.980265 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:56:48.980271 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:56:48.980276 | orchestrator |
2026-03-13 00:56:48.980282 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-13 00:56:48.980287 | orchestrator | Friday 13 March 2026 00:49:23 +0000 (0:00:00.340) 0:03:22.436 **********
2026-03-13 00:56:48.980293 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.980300 | orchestrator |
2026-03-13 00:56:48.980306 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-13 00:56:48.980312 | orchestrator | Friday 13 March 2026 00:49:23 +0000 (0:00:00.187) 0:03:22.623 **********
2026-03-13 00:56:48.980319 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.980326 | orchestrator |
2026-03-13 00:56:48.980332 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-13 00:56:48.980338 | orchestrator | Friday 13 March 2026 00:49:23 +0000 (0:00:00.213) 0:03:22.837 **********
2026-03-13 00:56:48.980345 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.980351 | orchestrator |
2026-03-13 00:56:48.980357 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-13 00:56:48.980363 | orchestrator | Friday 13 March 2026 00:49:24 +0000 (0:00:00.129) 0:03:22.966 **********
2026-03-13 00:56:48.980367 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.980371 | orchestrator |
2026-03-13 00:56:48.980374 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-13 00:56:48.980378 | orchestrator | Friday 13 March 2026 00:49:24 +0000 (0:00:00.578) 0:03:23.545 **********
2026-03-13 00:56:48.980382 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:56:48.980385 | orchestrator |
2026-03-13 00:56:48.980389 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-13 00:56:48.980393 | orchestrator | Friday 13 March 2026 00:49:24 +0000 (0:00:00.206) 0:03:23.752 **********
2026-03-13 00:56:48.980396 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-13 00:56:48.980400 | orchestrator | skipping: [testbed-node-3] =>
(item=testbed-node-3)  2026-03-13 00:56:48.980404 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-13 00:56:48.980408 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.980411 | orchestrator | 2026-03-13 00:56:48.980415 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-13 00:56:48.980440 | orchestrator | Friday 13 March 2026 00:49:25 +0000 (0:00:00.390) 0:03:24.142 ********** 2026-03-13 00:56:48.980444 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.980448 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.980452 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.980455 | orchestrator | 2026-03-13 00:56:48.980459 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-13 00:56:48.980463 | orchestrator | Friday 13 March 2026 00:49:25 +0000 (0:00:00.396) 0:03:24.539 ********** 2026-03-13 00:56:48.980467 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.980470 | orchestrator | 2026-03-13 00:56:48.980474 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-13 00:56:48.980478 | orchestrator | Friday 13 March 2026 00:49:25 +0000 (0:00:00.303) 0:03:24.843 ********** 2026-03-13 00:56:48.980481 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.980485 | orchestrator | 2026-03-13 00:56:48.980489 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-13 00:56:48.980492 | orchestrator | Friday 13 March 2026 00:49:26 +0000 (0:00:00.219) 0:03:25.062 ********** 2026-03-13 00:56:48.980496 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.980500 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.980503 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.980508 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-13 00:56:48.980515 | orchestrator | 2026-03-13 00:56:48.980519 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-03-13 00:56:48.980523 | orchestrator | Friday 13 March 2026 00:49:27 +0000 (0:00:01.251) 0:03:26.313 ********** 2026-03-13 00:56:48.980526 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.980530 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.980534 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.980537 | orchestrator | 2026-03-13 00:56:48.980541 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-13 00:56:48.980545 | orchestrator | Friday 13 March 2026 00:49:27 +0000 (0:00:00.391) 0:03:26.704 ********** 2026-03-13 00:56:48.980552 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:56:48.980556 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:56:48.980561 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:56:48.980567 | orchestrator | 2026-03-13 00:56:48.980573 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-13 00:56:48.980578 | orchestrator | Friday 13 March 2026 00:49:28 +0000 (0:00:01.198) 0:03:27.902 ********** 2026-03-13 00:56:48.980585 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-13 00:56:48.980591 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-13 00:56:48.980597 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-13 00:56:48.980603 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.980610 | orchestrator | 2026-03-13 00:56:48.980616 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-13 00:56:48.980621 | orchestrator | Friday 13 March 2026 00:49:29 +0000 (0:00:00.726) 
0:03:28.629 ********** 2026-03-13 00:56:48.980626 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.980652 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.980658 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.980664 | orchestrator | 2026-03-13 00:56:48.980670 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-13 00:56:48.980675 | orchestrator | Friday 13 March 2026 00:49:30 +0000 (0:00:00.465) 0:03:29.095 ********** 2026-03-13 00:56:48.980682 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.980688 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.980695 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.980701 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-13 00:56:48.980707 | orchestrator | 2026-03-13 00:56:48.980712 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-13 00:56:48.980718 | orchestrator | Friday 13 March 2026 00:49:30 +0000 (0:00:00.734) 0:03:29.829 ********** 2026-03-13 00:56:48.980725 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.980731 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.980740 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.980748 | orchestrator | 2026-03-13 00:56:48.980754 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-13 00:56:48.980760 | orchestrator | Friday 13 March 2026 00:49:31 +0000 (0:00:00.405) 0:03:30.235 ********** 2026-03-13 00:56:48.980765 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:56:48.980771 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:56:48.980777 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:56:48.980784 | orchestrator | 2026-03-13 00:56:48.980790 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] 
******************** 2026-03-13 00:56:48.980797 | orchestrator | Friday 13 March 2026 00:49:32 +0000 (0:00:01.150) 0:03:31.386 ********** 2026-03-13 00:56:48.980803 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-13 00:56:48.980849 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-13 00:56:48.980855 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-13 00:56:48.980862 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.980869 | orchestrator | 2026-03-13 00:56:48.980875 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-13 00:56:48.980889 | orchestrator | Friday 13 March 2026 00:49:32 +0000 (0:00:00.523) 0:03:31.909 ********** 2026-03-13 00:56:48.980895 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.980901 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.980907 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.980913 | orchestrator | 2026-03-13 00:56:48.980919 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-13 00:56:48.980926 | orchestrator | Friday 13 March 2026 00:49:33 +0000 (0:00:00.316) 0:03:32.226 ********** 2026-03-13 00:56:48.980932 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.980938 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.980944 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.980951 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.980957 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.980993 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.981002 | orchestrator | 2026-03-13 00:56:48.981008 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-13 00:56:48.981015 | orchestrator | Friday 13 March 2026 00:49:33 +0000 (0:00:00.667) 0:03:32.893 ********** 2026-03-13 
00:56:48.981021 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.981027 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.981033 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.981040 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:56:48.981046 | orchestrator | 2026-03-13 00:56:48.981052 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-13 00:56:48.981058 | orchestrator | Friday 13 March 2026 00:49:34 +0000 (0:00:00.721) 0:03:33.614 ********** 2026-03-13 00:56:48.981064 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.981070 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.981076 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.981082 | orchestrator | 2026-03-13 00:56:48.981088 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-13 00:56:48.981094 | orchestrator | Friday 13 March 2026 00:49:35 +0000 (0:00:00.528) 0:03:34.143 ********** 2026-03-13 00:56:48.981100 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:56:48.981106 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:56:48.981113 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:56:48.981118 | orchestrator | 2026-03-13 00:56:48.981125 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-13 00:56:48.981131 | orchestrator | Friday 13 March 2026 00:49:36 +0000 (0:00:01.241) 0:03:35.384 ********** 2026-03-13 00:56:48.981137 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-13 00:56:48.981143 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-13 00:56:48.981149 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-13 00:56:48.981156 | orchestrator | skipping: [testbed-node-0] 2026-03-13 
00:56:48.981162 | orchestrator | 2026-03-13 00:56:48.981173 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-13 00:56:48.981179 | orchestrator | Friday 13 March 2026 00:49:37 +0000 (0:00:00.586) 0:03:35.971 ********** 2026-03-13 00:56:48.981184 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.981190 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.981196 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.981202 | orchestrator | 2026-03-13 00:56:48.981208 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-03-13 00:56:48.981213 | orchestrator | 2026-03-13 00:56:48.981220 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-13 00:56:48.981225 | orchestrator | Friday 13 March 2026 00:49:37 +0000 (0:00:00.597) 0:03:36.569 ********** 2026-03-13 00:56:48.981231 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:56:48.981238 | orchestrator | 2026-03-13 00:56:48.981244 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-13 00:56:48.981255 | orchestrator | Friday 13 March 2026 00:49:38 +0000 (0:00:00.797) 0:03:37.366 ********** 2026-03-13 00:56:48.981261 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:56:48.981267 | orchestrator | 2026-03-13 00:56:48.981273 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-13 00:56:48.981279 | orchestrator | Friday 13 March 2026 00:49:38 +0000 (0:00:00.478) 0:03:37.844 ********** 2026-03-13 00:56:48.981285 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.981292 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.981298 | 
orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.981305 | orchestrator | 2026-03-13 00:56:48.981312 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-13 00:56:48.981319 | orchestrator | Friday 13 March 2026 00:49:39 +0000 (0:00:00.905) 0:03:38.750 ********** 2026-03-13 00:56:48.981325 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.981332 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.981339 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.981346 | orchestrator | 2026-03-13 00:56:48.981353 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-13 00:56:48.981358 | orchestrator | Friday 13 March 2026 00:49:40 +0000 (0:00:00.350) 0:03:39.100 ********** 2026-03-13 00:56:48.981364 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.981370 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.981376 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.981382 | orchestrator | 2026-03-13 00:56:48.981388 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-13 00:56:48.981395 | orchestrator | Friday 13 March 2026 00:49:40 +0000 (0:00:00.306) 0:03:39.407 ********** 2026-03-13 00:56:48.981402 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.981409 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.981415 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.981421 | orchestrator | 2026-03-13 00:56:48.981427 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-13 00:56:48.981433 | orchestrator | Friday 13 March 2026 00:49:40 +0000 (0:00:00.296) 0:03:39.703 ********** 2026-03-13 00:56:48.981439 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.981445 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.981451 | orchestrator | ok: 
[testbed-node-2] 2026-03-13 00:56:48.981457 | orchestrator | 2026-03-13 00:56:48.981463 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-13 00:56:48.981469 | orchestrator | Friday 13 March 2026 00:49:41 +0000 (0:00:01.020) 0:03:40.724 ********** 2026-03-13 00:56:48.981475 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.981481 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.981487 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.981494 | orchestrator | 2026-03-13 00:56:48.981500 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-13 00:56:48.981506 | orchestrator | Friday 13 March 2026 00:49:42 +0000 (0:00:00.325) 0:03:41.049 ********** 2026-03-13 00:56:48.981540 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.981547 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.981553 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.981559 | orchestrator | 2026-03-13 00:56:48.981566 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-13 00:56:48.981572 | orchestrator | Friday 13 March 2026 00:49:42 +0000 (0:00:00.303) 0:03:41.353 ********** 2026-03-13 00:56:48.981578 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.981584 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.981590 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.981596 | orchestrator | 2026-03-13 00:56:48.981603 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-13 00:56:48.981609 | orchestrator | Friday 13 March 2026 00:49:43 +0000 (0:00:00.675) 0:03:42.028 ********** 2026-03-13 00:56:48.981625 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.981740 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.981746 | orchestrator | ok: [testbed-node-2] 2026-03-13 
00:56:48.981750 | orchestrator | 2026-03-13 00:56:48.981754 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-13 00:56:48.981758 | orchestrator | Friday 13 March 2026 00:49:44 +0000 (0:00:00.932) 0:03:42.961 ********** 2026-03-13 00:56:48.981762 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.981765 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.981769 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.981773 | orchestrator | 2026-03-13 00:56:48.981776 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-13 00:56:48.981780 | orchestrator | Friday 13 March 2026 00:49:44 +0000 (0:00:00.316) 0:03:43.277 ********** 2026-03-13 00:56:48.981784 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.981787 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.981791 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.981795 | orchestrator | 2026-03-13 00:56:48.981799 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-13 00:56:48.981802 | orchestrator | Friday 13 March 2026 00:49:44 +0000 (0:00:00.320) 0:03:43.598 ********** 2026-03-13 00:56:48.981806 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.981815 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.981821 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.981827 | orchestrator | 2026-03-13 00:56:48.981833 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-13 00:56:48.981839 | orchestrator | Friday 13 March 2026 00:49:44 +0000 (0:00:00.297) 0:03:43.895 ********** 2026-03-13 00:56:48.981845 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.981851 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.981857 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.981862 | 
orchestrator | 2026-03-13 00:56:48.981868 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-13 00:56:48.981874 | orchestrator | Friday 13 March 2026 00:49:45 +0000 (0:00:00.690) 0:03:44.586 ********** 2026-03-13 00:56:48.981880 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.981886 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.981892 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.981897 | orchestrator | 2026-03-13 00:56:48.981903 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-13 00:56:48.981908 | orchestrator | Friday 13 March 2026 00:49:45 +0000 (0:00:00.357) 0:03:44.943 ********** 2026-03-13 00:56:48.981914 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.981920 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.981926 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.981932 | orchestrator | 2026-03-13 00:56:48.981939 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-13 00:56:48.981945 | orchestrator | Friday 13 March 2026 00:49:46 +0000 (0:00:00.345) 0:03:45.289 ********** 2026-03-13 00:56:48.981951 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.981958 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.981964 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.981971 | orchestrator | 2026-03-13 00:56:48.981977 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-13 00:56:48.981984 | orchestrator | Friday 13 March 2026 00:49:46 +0000 (0:00:00.333) 0:03:45.623 ********** 2026-03-13 00:56:48.981990 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.981997 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.982003 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.982009 | orchestrator | 
2026-03-13 00:56:48.982046 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-13 00:56:48.982053 | orchestrator | Friday 13 March 2026 00:49:46 +0000 (0:00:00.310) 0:03:45.934 ********** 2026-03-13 00:56:48.982059 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.982072 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.982079 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.982085 | orchestrator | 2026-03-13 00:56:48.982092 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-13 00:56:48.982098 | orchestrator | Friday 13 March 2026 00:49:47 +0000 (0:00:00.547) 0:03:46.481 ********** 2026-03-13 00:56:48.982104 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.982111 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.982117 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.982123 | orchestrator | 2026-03-13 00:56:48.982130 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-13 00:56:48.982136 | orchestrator | Friday 13 March 2026 00:49:48 +0000 (0:00:00.586) 0:03:47.068 ********** 2026-03-13 00:56:48.982143 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.982149 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.982156 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.982162 | orchestrator | 2026-03-13 00:56:48.982169 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-13 00:56:48.982175 | orchestrator | Friday 13 March 2026 00:49:48 +0000 (0:00:00.337) 0:03:47.406 ********** 2026-03-13 00:56:48.982182 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:56:48.982188 | orchestrator | 2026-03-13 00:56:48.982195 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] 
************** 2026-03-13 00:56:48.982201 | orchestrator | Friday 13 March 2026 00:49:49 +0000 (0:00:00.828) 0:03:48.234 ********** 2026-03-13 00:56:48.982208 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.982214 | orchestrator | 2026-03-13 00:56:48.982266 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-03-13 00:56:48.982276 | orchestrator | Friday 13 March 2026 00:49:49 +0000 (0:00:00.169) 0:03:48.403 ********** 2026-03-13 00:56:48.982282 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-13 00:56:48.982288 | orchestrator | 2026-03-13 00:56:48.982294 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-03-13 00:56:48.982301 | orchestrator | Friday 13 March 2026 00:49:50 +0000 (0:00:01.053) 0:03:49.457 ********** 2026-03-13 00:56:48.982307 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.982313 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.982320 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.982325 | orchestrator | 2026-03-13 00:56:48.982331 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-13 00:56:48.982338 | orchestrator | Friday 13 March 2026 00:49:50 +0000 (0:00:00.277) 0:03:49.734 ********** 2026-03-13 00:56:48.982344 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.982351 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.982358 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.982363 | orchestrator | 2026-03-13 00:56:48.982369 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-13 00:56:48.982375 | orchestrator | Friday 13 March 2026 00:49:51 +0000 (0:00:00.493) 0:03:50.228 ********** 2026-03-13 00:56:48.982381 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:56:48.982388 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:56:48.982394 | 
orchestrator | changed: [testbed-node-2] 2026-03-13 00:56:48.982400 | orchestrator | 2026-03-13 00:56:48.982406 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-13 00:56:48.982411 | orchestrator | Friday 13 March 2026 00:49:52 +0000 (0:00:01.166) 0:03:51.394 ********** 2026-03-13 00:56:48.982418 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:56:48.982423 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:56:48.982429 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:56:48.982435 | orchestrator | 2026-03-13 00:56:48.982441 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-03-13 00:56:48.982452 | orchestrator | Friday 13 March 2026 00:49:53 +0000 (0:00:00.736) 0:03:52.131 ********** 2026-03-13 00:56:48.982459 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:56:48.982470 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:56:48.982477 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:56:48.982483 | orchestrator | 2026-03-13 00:56:48.982490 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-13 00:56:48.982497 | orchestrator | Friday 13 March 2026 00:49:53 +0000 (0:00:00.641) 0:03:52.772 ********** 2026-03-13 00:56:48.982504 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.982510 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.982516 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.982523 | orchestrator | 2026-03-13 00:56:48.982530 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-13 00:56:48.982537 | orchestrator | Friday 13 March 2026 00:49:54 +0000 (0:00:00.667) 0:03:53.439 ********** 2026-03-13 00:56:48.982544 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:56:48.982550 | orchestrator | 2026-03-13 00:56:48.982556 | orchestrator | TASK [ceph-mon : Slurp admin keyring] 
****************************************** 2026-03-13 00:56:48.982563 | orchestrator | Friday 13 March 2026 00:49:56 +0000 (0:00:02.357) 0:03:55.797 ********** 2026-03-13 00:56:48.982570 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.982577 | orchestrator | 2026-03-13 00:56:48.982584 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-13 00:56:48.982590 | orchestrator | Friday 13 March 2026 00:49:57 +0000 (0:00:00.871) 0:03:56.668 ********** 2026-03-13 00:56:48.982596 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-13 00:56:48.982602 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-13 00:56:48.982608 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-13 00:56:48.982615 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-13 00:56:48.982621 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-13 00:56:48.982627 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-03-13 00:56:48.982645 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-13 00:56:48.982652 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-03-13 00:56:48.982658 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-13 00:56:48.982664 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-03-13 00:56:48.982670 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-03-13 00:56:48.982676 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-03-13 00:56:48.982683 | orchestrator | 2026-03-13 00:56:48.982689 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-13 00:56:48.982695 | orchestrator | Friday 13 March 2026 00:50:00 +0000 (0:00:03.075) 0:03:59.743 ********** 2026-03-13 00:56:48.982702 
| orchestrator | changed: [testbed-node-0] 2026-03-13 00:56:48.982708 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:56:48.982714 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:56:48.982721 | orchestrator | 2026-03-13 00:56:48.982727 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-13 00:56:48.982733 | orchestrator | Friday 13 March 2026 00:50:02 +0000 (0:00:01.612) 0:04:01.356 ********** 2026-03-13 00:56:48.982739 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.982746 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.982750 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.982754 | orchestrator | 2026-03-13 00:56:48.982758 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-13 00:56:48.982762 | orchestrator | Friday 13 March 2026 00:50:02 +0000 (0:00:00.264) 0:04:01.620 ********** 2026-03-13 00:56:48.982765 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.982769 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.982773 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.982776 | orchestrator | 2026-03-13 00:56:48.982780 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-13 00:56:48.982784 | orchestrator | Friday 13 March 2026 00:50:03 +0000 (0:00:00.414) 0:04:02.035 ********** 2026-03-13 00:56:48.982793 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:56:48.982822 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:56:48.982827 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:56:48.982830 | orchestrator | 2026-03-13 00:56:48.982834 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-13 00:56:48.982838 | orchestrator | Friday 13 March 2026 00:50:04 +0000 (0:00:01.416) 0:04:03.452 ********** 2026-03-13 00:56:48.982841 | orchestrator | changed: [testbed-node-0] 
2026-03-13 00:56:48.982845 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:56:48.982849 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:56:48.982853 | orchestrator | 2026-03-13 00:56:48.982856 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-13 00:56:48.982860 | orchestrator | Friday 13 March 2026 00:50:05 +0000 (0:00:01.223) 0:04:04.675 ********** 2026-03-13 00:56:48.982864 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.982867 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.982871 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.982875 | orchestrator | 2026-03-13 00:56:48.982879 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-03-13 00:56:48.982885 | orchestrator | Friday 13 March 2026 00:50:06 +0000 (0:00:00.337) 0:04:05.012 ********** 2026-03-13 00:56:48.982892 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-2, testbed-node-1 2026-03-13 00:56:48.982898 | orchestrator | 2026-03-13 00:56:48.982904 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-13 00:56:48.982911 | orchestrator | Friday 13 March 2026 00:50:06 +0000 (0:00:00.741) 0:04:05.754 ********** 2026-03-13 00:56:48.982917 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.982923 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.982930 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.982937 | orchestrator | 2026-03-13 00:56:48.982943 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-13 00:56:48.982949 | orchestrator | Friday 13 March 2026 00:50:07 +0000 (0:00:00.482) 0:04:06.237 ********** 2026-03-13 00:56:48.982960 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.982966 | orchestrator | skipping: 
[testbed-node-0] 2026-03-13 00:56:48.982971 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.982978 | orchestrator | 2026-03-13 00:56:48.982985 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-13 00:56:48.982992 | orchestrator | Friday 13 March 2026 00:50:07 +0000 (0:00:00.480) 0:04:06.718 ********** 2026-03-13 00:56:48.982998 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:56:48.983005 | orchestrator | 2026-03-13 00:56:48.983011 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-13 00:56:48.983017 | orchestrator | Friday 13 March 2026 00:50:08 +0000 (0:00:00.797) 0:04:07.516 ********** 2026-03-13 00:56:48.983024 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:56:48.983029 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:56:48.983036 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:56:48.983042 | orchestrator | 2026-03-13 00:56:48.983049 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-13 00:56:48.983055 | orchestrator | Friday 13 March 2026 00:50:10 +0000 (0:00:01.696) 0:04:09.212 ********** 2026-03-13 00:56:48.983062 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:56:48.983069 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:56:48.983076 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:56:48.983082 | orchestrator | 2026-03-13 00:56:48.983089 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-13 00:56:48.983095 | orchestrator | Friday 13 March 2026 00:50:11 +0000 (0:00:01.684) 0:04:10.897 ********** 2026-03-13 00:56:48.983100 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:56:48.983113 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:56:48.983120 | orchestrator | changed: 
[testbed-node-2] 2026-03-13 00:56:48.983129 | orchestrator | 2026-03-13 00:56:48.983136 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-13 00:56:48.983142 | orchestrator | Friday 13 March 2026 00:50:13 +0000 (0:00:01.717) 0:04:12.615 ********** 2026-03-13 00:56:48.983148 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:56:48.983155 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:56:48.983161 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:56:48.983167 | orchestrator | 2026-03-13 00:56:48.983173 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-03-13 00:56:48.983179 | orchestrator | Friday 13 March 2026 00:50:16 +0000 (0:00:02.613) 0:04:15.229 ********** 2026-03-13 00:56:48.983186 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:56:48.983192 | orchestrator | 2026-03-13 00:56:48.983198 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-03-13 00:56:48.983204 | orchestrator | Friday 13 March 2026 00:50:16 +0000 (0:00:00.485) 0:04:15.714 ********** 2026-03-13 00:56:48.983211 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-03-13 00:56:48.983217 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.983223 | orchestrator | 2026-03-13 00:56:48.983230 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-13 00:56:48.983236 | orchestrator | Friday 13 March 2026 00:50:38 +0000 (0:00:21.703) 0:04:37.418 ********** 2026-03-13 00:56:48.983242 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.983248 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.983254 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.983261 | orchestrator | 2026-03-13 00:56:48.983267 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-13 00:56:48.983274 | orchestrator | Friday 13 March 2026 00:50:47 +0000 (0:00:08.540) 0:04:45.958 ********** 2026-03-13 00:56:48.983280 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.983286 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.983292 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.983298 | orchestrator | 2026-03-13 00:56:48.983305 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-13 00:56:48.983344 | orchestrator | Friday 13 March 2026 00:50:47 +0000 (0:00:00.432) 0:04:46.390 ********** 2026-03-13 00:56:48.983353 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ce01cf06e9b5278954a7e92818893b38dddbc29'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-13 00:56:48.983362 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 
'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ce01cf06e9b5278954a7e92818893b38dddbc29'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-13 00:56:48.983369 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ce01cf06e9b5278954a7e92818893b38dddbc29'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-13 00:56:48.983382 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ce01cf06e9b5278954a7e92818893b38dddbc29'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-13 00:56:48.983394 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ce01cf06e9b5278954a7e92818893b38dddbc29'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-13 00:56:48.983402 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ce01cf06e9b5278954a7e92818893b38dddbc29'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__1ce01cf06e9b5278954a7e92818893b38dddbc29'}])  2026-03-13 00:56:48.983410 | orchestrator | 2026-03-13 00:56:48.983417 | orchestrator | RUNNING HANDLER 
[ceph-handler : Make tempdir for scripts] ********************** 2026-03-13 00:56:48.983423 | orchestrator | Friday 13 March 2026 00:51:00 +0000 (0:00:13.346) 0:04:59.737 ********** 2026-03-13 00:56:48.983429 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.983436 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.983442 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.983448 | orchestrator | 2026-03-13 00:56:48.983454 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-13 00:56:48.983460 | orchestrator | Friday 13 March 2026 00:51:01 +0000 (0:00:00.340) 0:05:00.078 ********** 2026-03-13 00:56:48.983467 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:56:48.983473 | orchestrator | 2026-03-13 00:56:48.983479 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-13 00:56:48.983485 | orchestrator | Friday 13 March 2026 00:51:01 +0000 (0:00:00.742) 0:05:00.820 ********** 2026-03-13 00:56:48.983491 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.983498 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.983504 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.983511 | orchestrator | 2026-03-13 00:56:48.983517 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-13 00:56:48.983523 | orchestrator | Friday 13 March 2026 00:51:02 +0000 (0:00:00.335) 0:05:01.155 ********** 2026-03-13 00:56:48.983529 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.983536 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.983542 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.983548 | orchestrator | 2026-03-13 00:56:48.983554 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-13 
00:56:48.983560 | orchestrator | Friday 13 March 2026 00:51:02 +0000 (0:00:00.308) 0:05:01.464 ********** 2026-03-13 00:56:48.983566 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-13 00:56:48.983572 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-13 00:56:48.983579 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-13 00:56:48.983585 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.983590 | orchestrator | 2026-03-13 00:56:48.983597 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-13 00:56:48.983603 | orchestrator | Friday 13 March 2026 00:51:03 +0000 (0:00:01.090) 0:05:02.555 ********** 2026-03-13 00:56:48.983610 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.983616 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.983664 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.983672 | orchestrator | 2026-03-13 00:56:48.983678 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-03-13 00:56:48.983685 | orchestrator | 2026-03-13 00:56:48.983691 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-13 00:56:48.983697 | orchestrator | Friday 13 March 2026 00:51:04 +0000 (0:00:00.512) 0:05:03.067 ********** 2026-03-13 00:56:48.983710 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:56:48.983717 | orchestrator | 2026-03-13 00:56:48.983723 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-13 00:56:48.983729 | orchestrator | Friday 13 March 2026 00:51:04 +0000 (0:00:00.431) 0:05:03.499 ********** 2026-03-13 00:56:48.983736 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-13 00:56:48.983742 | orchestrator | 2026-03-13 00:56:48.983748 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-13 00:56:48.983755 | orchestrator | Friday 13 March 2026 00:51:05 +0000 (0:00:00.615) 0:05:04.114 ********** 2026-03-13 00:56:48.983761 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.983767 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.983773 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.983779 | orchestrator | 2026-03-13 00:56:48.983785 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-13 00:56:48.983789 | orchestrator | Friday 13 March 2026 00:51:05 +0000 (0:00:00.656) 0:05:04.771 ********** 2026-03-13 00:56:48.983793 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.983796 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.983800 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.983804 | orchestrator | 2026-03-13 00:56:48.983807 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-13 00:56:48.983815 | orchestrator | Friday 13 March 2026 00:51:06 +0000 (0:00:00.300) 0:05:05.071 ********** 2026-03-13 00:56:48.983821 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.983827 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.983833 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.983839 | orchestrator | 2026-03-13 00:56:48.983845 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-13 00:56:48.983850 | orchestrator | Friday 13 March 2026 00:51:06 +0000 (0:00:00.453) 0:05:05.525 ********** 2026-03-13 00:56:48.983856 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.983862 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.983869 | orchestrator | skipping: 
[testbed-node-2] 2026-03-13 00:56:48.983875 | orchestrator | 2026-03-13 00:56:48.983881 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-13 00:56:48.983888 | orchestrator | Friday 13 March 2026 00:51:06 +0000 (0:00:00.273) 0:05:05.798 ********** 2026-03-13 00:56:48.983894 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.983900 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.983906 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.983912 | orchestrator | 2026-03-13 00:56:48.983918 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-13 00:56:48.983924 | orchestrator | Friday 13 March 2026 00:51:07 +0000 (0:00:00.596) 0:05:06.395 ********** 2026-03-13 00:56:48.983931 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.983937 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.983943 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.983949 | orchestrator | 2026-03-13 00:56:48.983955 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-13 00:56:48.983961 | orchestrator | Friday 13 March 2026 00:51:07 +0000 (0:00:00.267) 0:05:06.662 ********** 2026-03-13 00:56:48.983968 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.983974 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.983980 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.983986 | orchestrator | 2026-03-13 00:56:48.983992 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-13 00:56:48.983998 | orchestrator | Friday 13 March 2026 00:51:08 +0000 (0:00:00.422) 0:05:07.085 ********** 2026-03-13 00:56:48.984004 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.984010 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.984021 | orchestrator | ok: [testbed-node-2] 2026-03-13 
00:56:48.984027 | orchestrator | 2026-03-13 00:56:48.984033 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-13 00:56:48.984040 | orchestrator | Friday 13 March 2026 00:51:08 +0000 (0:00:00.719) 0:05:07.804 ********** 2026-03-13 00:56:48.984046 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.984053 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.984057 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.984061 | orchestrator | 2026-03-13 00:56:48.984064 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-13 00:56:48.984068 | orchestrator | Friday 13 March 2026 00:51:09 +0000 (0:00:00.790) 0:05:08.595 ********** 2026-03-13 00:56:48.984072 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.984075 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.984079 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.984083 | orchestrator | 2026-03-13 00:56:48.984086 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-13 00:56:48.984090 | orchestrator | Friday 13 March 2026 00:51:09 +0000 (0:00:00.270) 0:05:08.865 ********** 2026-03-13 00:56:48.984094 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.984097 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.984101 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.984105 | orchestrator | 2026-03-13 00:56:48.984108 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-13 00:56:48.984112 | orchestrator | Friday 13 March 2026 00:51:10 +0000 (0:00:00.482) 0:05:09.347 ********** 2026-03-13 00:56:48.984116 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.984120 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.984123 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.984127 | orchestrator | 
2026-03-13 00:56:48.984131 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-13 00:56:48.984154 | orchestrator | Friday 13 March 2026 00:51:10 +0000 (0:00:00.267) 0:05:09.615 ********** 2026-03-13 00:56:48.984158 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.984162 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.984166 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.984170 | orchestrator | 2026-03-13 00:56:48.984173 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-13 00:56:48.984177 | orchestrator | Friday 13 March 2026 00:51:10 +0000 (0:00:00.287) 0:05:09.903 ********** 2026-03-13 00:56:48.984181 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.984184 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.984188 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.984192 | orchestrator | 2026-03-13 00:56:48.984195 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-13 00:56:48.984199 | orchestrator | Friday 13 March 2026 00:51:11 +0000 (0:00:00.275) 0:05:10.178 ********** 2026-03-13 00:56:48.984203 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.984207 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.984210 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.984214 | orchestrator | 2026-03-13 00:56:48.984218 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-13 00:56:48.984221 | orchestrator | Friday 13 March 2026 00:51:11 +0000 (0:00:00.298) 0:05:10.477 ********** 2026-03-13 00:56:48.984225 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.984229 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.984232 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.984236 | orchestrator | 
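The `Check for a … container` / `Set_fact handler_…_status` pairs in this play follow one pattern per daemon type: a container lookup on each host, then a boolean fact that the restart handlers later consult. A rough sketch of that mapping (the dict shape and the `rc == 0` convention are assumptions for illustration, not ceph-handler's actual variables):

```python
def handler_statuses(check_results):
    """Map per-daemon container checks to handler_<daemon>_status facts.
    A check that was skipped (None) yields False; otherwise the status
    is True when the container lookup exited 0."""
    return {
        f"handler_{daemon}_status": bool(res is not None and res.get("rc") == 0)
        for daemon, res in check_results.items()
    }

# Mirrors this host's run: mon/mgr containers found, osd/rgw checks skipped.
checks = {
    "mon": {"rc": 0},
    "osd": None,
    "mgr": {"rc": 0},
    "rgw": None,
}
facts = handler_statuses(checks)
assert facts["handler_mon_status"] and not facts["handler_osd_status"]
```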
2026-03-13 00:56:48.984240 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-13 00:56:48.984244 | orchestrator | Friday 13 March 2026 00:51:12 +0000 (0:00:00.588) 0:05:11.066 ********** 2026-03-13 00:56:48.984247 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.984251 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.984257 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.984267 | orchestrator | 2026-03-13 00:56:48.984273 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-13 00:56:48.984280 | orchestrator | Friday 13 March 2026 00:51:12 +0000 (0:00:00.311) 0:05:11.378 ********** 2026-03-13 00:56:48.984290 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.984298 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.984305 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.984311 | orchestrator | 2026-03-13 00:56:48.984318 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-13 00:56:48.984325 | orchestrator | Friday 13 March 2026 00:51:12 +0000 (0:00:00.348) 0:05:11.727 ********** 2026-03-13 00:56:48.984331 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.984338 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.984344 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.984351 | orchestrator | 2026-03-13 00:56:48.984358 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-13 00:56:48.984364 | orchestrator | Friday 13 March 2026 00:51:13 +0000 (0:00:00.750) 0:05:12.477 ********** 2026-03-13 00:56:48.984371 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-13 00:56:48.984377 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-13 00:56:48.984384 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-03-13 00:56:48.984391 | orchestrator | 2026-03-13 00:56:48.984397 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-13 00:56:48.984404 | orchestrator | Friday 13 March 2026 00:51:14 +0000 (0:00:00.576) 0:05:13.054 ********** 2026-03-13 00:56:48.984411 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:56:48.984419 | orchestrator | 2026-03-13 00:56:48.984426 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-13 00:56:48.984432 | orchestrator | Friday 13 March 2026 00:51:14 +0000 (0:00:00.444) 0:05:13.498 ********** 2026-03-13 00:56:48.984439 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:56:48.984446 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:56:48.984452 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:56:48.984459 | orchestrator | 2026-03-13 00:56:48.984465 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-13 00:56:48.984472 | orchestrator | Friday 13 March 2026 00:51:15 +0000 (0:00:00.665) 0:05:14.163 ********** 2026-03-13 00:56:48.984479 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.984485 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.984492 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.984499 | orchestrator | 2026-03-13 00:56:48.984506 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-03-13 00:56:48.984513 | orchestrator | Friday 13 March 2026 00:51:15 +0000 (0:00:00.418) 0:05:14.582 ********** 2026-03-13 00:56:48.984520 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-13 00:56:48.984527 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-13 00:56:48.984534 | orchestrator | changed: [testbed-node-0] => (item=None) 
2026-03-13 00:56:48.984541 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-03-13 00:56:48.984547 | orchestrator | 2026-03-13 00:56:48.984554 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-13 00:56:48.984560 | orchestrator | Friday 13 March 2026 00:51:25 +0000 (0:00:10.133) 0:05:24.715 ********** 2026-03-13 00:56:48.984566 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.984573 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.984580 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.984587 | orchestrator | 2026-03-13 00:56:48.984594 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-13 00:56:48.984601 | orchestrator | Friday 13 March 2026 00:51:26 +0000 (0:00:00.335) 0:05:25.050 ********** 2026-03-13 00:56:48.984607 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-13 00:56:48.984615 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-13 00:56:48.984668 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-13 00:56:48.984676 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-13 00:56:48.984682 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-13 00:56:48.984716 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-13 00:56:48.984723 | orchestrator | 2026-03-13 00:56:48.984730 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-13 00:56:48.984735 | orchestrator | Friday 13 March 2026 00:51:28 +0000 (0:00:02.254) 0:05:27.304 ********** 2026-03-13 00:56:48.984741 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-13 00:56:48.984746 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-13 00:56:48.984753 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-13 
00:56:48.984758 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-13 00:56:48.984764 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-13 00:56:48.984770 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-13 00:56:48.984776 | orchestrator | 2026-03-13 00:56:48.984783 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-03-13 00:56:48.984788 | orchestrator | Friday 13 March 2026 00:51:29 +0000 (0:00:01.357) 0:05:28.662 ********** 2026-03-13 00:56:48.984795 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.984812 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.984819 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.984825 | orchestrator | 2026-03-13 00:56:48.984831 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-13 00:56:48.984837 | orchestrator | Friday 13 March 2026 00:51:30 +0000 (0:00:00.910) 0:05:29.572 ********** 2026-03-13 00:56:48.984843 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.984849 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.984855 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.984861 | orchestrator | 2026-03-13 00:56:48.984868 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-13 00:56:48.984874 | orchestrator | Friday 13 March 2026 00:51:30 +0000 (0:00:00.268) 0:05:29.841 ********** 2026-03-13 00:56:48.984880 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.984898 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.984910 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.984917 | orchestrator | 2026-03-13 00:56:48.984928 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-13 00:56:48.984933 | orchestrator | Friday 13 March 2026 00:51:31 +0000 (0:00:00.289) 0:05:30.130 
********** 2026-03-13 00:56:48.984939 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:56:48.984945 | orchestrator | 2026-03-13 00:56:48.984951 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-13 00:56:48.984956 | orchestrator | Friday 13 March 2026 00:51:31 +0000 (0:00:00.744) 0:05:30.875 ********** 2026-03-13 00:56:48.984962 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.984967 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.984973 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.984979 | orchestrator | 2026-03-13 00:56:48.984985 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-13 00:56:48.984992 | orchestrator | Friday 13 March 2026 00:51:32 +0000 (0:00:00.308) 0:05:31.183 ********** 2026-03-13 00:56:48.984998 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.985004 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.985011 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.985017 | orchestrator | 2026-03-13 00:56:48.985023 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-13 00:56:48.985030 | orchestrator | Friday 13 March 2026 00:51:32 +0000 (0:00:00.313) 0:05:31.497 ********** 2026-03-13 00:56:48.985036 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:56:48.985050 | orchestrator | 2026-03-13 00:56:48.985054 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-13 00:56:48.985060 | orchestrator | Friday 13 March 2026 00:51:33 +0000 (0:00:00.615) 0:05:32.112 ********** 2026-03-13 00:56:48.985065 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:56:48.985071 | orchestrator | changed: 
[testbed-node-1] 2026-03-13 00:56:48.985076 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:56:48.985082 | orchestrator | 2026-03-13 00:56:48.985088 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-13 00:56:48.985093 | orchestrator | Friday 13 March 2026 00:51:34 +0000 (0:00:01.299) 0:05:33.411 ********** 2026-03-13 00:56:48.985099 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:56:48.985105 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:56:48.985110 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:56:48.985116 | orchestrator | 2026-03-13 00:56:48.985122 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-13 00:56:48.985129 | orchestrator | Friday 13 March 2026 00:51:35 +0000 (0:00:01.317) 0:05:34.729 ********** 2026-03-13 00:56:48.985135 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:56:48.985140 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:56:48.985146 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:56:48.985153 | orchestrator | 2026-03-13 00:56:48.985159 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-03-13 00:56:48.985167 | orchestrator | Friday 13 March 2026 00:51:37 +0000 (0:00:01.854) 0:05:36.583 ********** 2026-03-13 00:56:48.985172 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:56:48.985178 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:56:48.985184 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:56:48.985190 | orchestrator | 2026-03-13 00:56:48.985196 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-13 00:56:48.985201 | orchestrator | Friday 13 March 2026 00:51:39 +0000 (0:00:02.220) 0:05:38.804 ********** 2026-03-13 00:56:48.985207 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.985213 | orchestrator | skipping: 
[testbed-node-1] 2026-03-13 00:56:48.985220 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-03-13 00:56:48.985225 | orchestrator | 2026-03-13 00:56:48.985231 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-03-13 00:56:48.985237 | orchestrator | Friday 13 March 2026 00:51:40 +0000 (0:00:00.347) 0:05:39.152 ********** 2026-03-13 00:56:48.985282 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-03-13 00:56:48.985290 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-03-13 00:56:48.985296 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-03-13 00:56:48.985302 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-03-13 00:56:48.985308 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2026-03-13 00:56:48.985315 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left). 
2026-03-13 00:56:48.985321 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-13 00:56:48.985326 | orchestrator | 2026-03-13 00:56:48.985332 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-03-13 00:56:48.985337 | orchestrator | Friday 13 March 2026 00:52:16 +0000 (0:00:36.083) 0:06:15.235 ********** 2026-03-13 00:56:48.985344 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-13 00:56:48.985350 | orchestrator | 2026-03-13 00:56:48.985356 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-03-13 00:56:48.985368 | orchestrator | Friday 13 March 2026 00:52:17 +0000 (0:00:01.316) 0:06:16.551 ********** 2026-03-13 00:56:48.985375 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.985382 | orchestrator | 2026-03-13 00:56:48.985387 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-03-13 00:56:48.985394 | orchestrator | Friday 13 March 2026 00:52:17 +0000 (0:00:00.302) 0:06:16.853 ********** 2026-03-13 00:56:48.985400 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.985407 | orchestrator | 2026-03-13 00:56:48.985411 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-03-13 00:56:48.985419 | orchestrator | Friday 13 March 2026 00:52:18 +0000 (0:00:00.129) 0:06:16.983 ********** 2026-03-13 00:56:48.985423 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-03-13 00:56:48.985427 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-03-13 00:56:48.985431 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-03-13 00:56:48.985434 | orchestrator | 2026-03-13 00:56:48.985438 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-03-13 00:56:48.985442 | orchestrator | Friday 13 March 2026 00:52:24 +0000 (0:00:06.608) 0:06:23.591 ********** 2026-03-13 00:56:48.985445 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-03-13 00:56:48.985449 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-03-13 00:56:48.985453 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-03-13 00:56:48.985457 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-03-13 00:56:48.985460 | orchestrator | 2026-03-13 00:56:48.985464 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-13 00:56:48.985468 | orchestrator | Friday 13 March 2026 00:52:29 +0000 (0:00:05.184) 0:06:28.776 ********** 2026-03-13 00:56:48.985472 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:56:48.985475 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:56:48.985479 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:56:48.985483 | orchestrator | 2026-03-13 00:56:48.985486 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-13 00:56:48.985490 | orchestrator | Friday 13 March 2026 00:52:30 +0000 (0:00:00.673) 0:06:29.449 ********** 2026-03-13 00:56:48.985494 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:56:48.985497 | orchestrator | 2026-03-13 00:56:48.985501 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-13 00:56:48.985505 | orchestrator | Friday 13 March 2026 00:52:31 +0000 (0:00:00.600) 0:06:30.049 ********** 2026-03-13 00:56:48.985508 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.985512 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.985516 | orchestrator | ok: 
[testbed-node-2] 2026-03-13 00:56:48.985520 | orchestrator | 2026-03-13 00:56:48.985523 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-13 00:56:48.985527 | orchestrator | Friday 13 March 2026 00:52:31 +0000 (0:00:00.300) 0:06:30.349 ********** 2026-03-13 00:56:48.985531 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:56:48.985535 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:56:48.985538 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:56:48.985542 | orchestrator | 2026-03-13 00:56:48.985546 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-13 00:56:48.985549 | orchestrator | Friday 13 March 2026 00:52:32 +0000 (0:00:01.239) 0:06:31.589 ********** 2026-03-13 00:56:48.985553 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-13 00:56:48.985557 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-13 00:56:48.985561 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-13 00:56:48.985564 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.985568 | orchestrator | 2026-03-13 00:56:48.985575 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-13 00:56:48.985578 | orchestrator | Friday 13 March 2026 00:52:33 +0000 (0:00:00.751) 0:06:32.340 ********** 2026-03-13 00:56:48.985582 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.985586 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.985589 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.985593 | orchestrator | 2026-03-13 00:56:48.985597 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-03-13 00:56:48.985601 | orchestrator | 2026-03-13 00:56:48.985604 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-13 
00:56:48.985626 | orchestrator | Friday 13 March 2026 00:52:34 +0000 (0:00:00.645) 0:06:32.986 ********** 2026-03-13 00:56:48.985645 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-13 00:56:48.985652 | orchestrator | 2026-03-13 00:56:48.985658 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-13 00:56:48.985664 | orchestrator | Friday 13 March 2026 00:52:34 +0000 (0:00:00.441) 0:06:33.427 ********** 2026-03-13 00:56:48.985670 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-13 00:56:48.985676 | orchestrator | 2026-03-13 00:56:48.985683 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-13 00:56:48.985689 | orchestrator | Friday 13 March 2026 00:52:35 +0000 (0:00:00.623) 0:06:34.050 ********** 2026-03-13 00:56:48.985696 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.985702 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.985707 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.985714 | orchestrator | 2026-03-13 00:56:48.985718 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-13 00:56:48.985722 | orchestrator | Friday 13 March 2026 00:52:35 +0000 (0:00:00.263) 0:06:34.314 ********** 2026-03-13 00:56:48.985726 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.985729 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.985733 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.985737 | orchestrator | 2026-03-13 00:56:48.985741 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-13 00:56:48.985744 | orchestrator | Friday 13 March 2026 00:52:36 +0000 (0:00:00.671) 0:06:34.986 ********** 
2026-03-13 00:56:48.985748 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.985752 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.985755 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.985759 | orchestrator | 2026-03-13 00:56:48.985763 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-13 00:56:48.985770 | orchestrator | Friday 13 March 2026 00:52:36 +0000 (0:00:00.611) 0:06:35.597 ********** 2026-03-13 00:56:48.985774 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.985777 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.985781 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.985785 | orchestrator | 2026-03-13 00:56:48.985788 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-13 00:56:48.985792 | orchestrator | Friday 13 March 2026 00:52:37 +0000 (0:00:00.830) 0:06:36.427 ********** 2026-03-13 00:56:48.985796 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.985800 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.985803 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.985807 | orchestrator | 2026-03-13 00:56:48.985811 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-13 00:56:48.985815 | orchestrator | Friday 13 March 2026 00:52:37 +0000 (0:00:00.274) 0:06:36.701 ********** 2026-03-13 00:56:48.985818 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.985822 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.985826 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.985829 | orchestrator | 2026-03-13 00:56:48.985833 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-13 00:56:48.985840 | orchestrator | Friday 13 March 2026 00:52:38 +0000 (0:00:00.273) 0:06:36.975 ********** 2026-03-13 00:56:48.985844 | 
orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.985848 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.985851 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.985855 | orchestrator | 2026-03-13 00:56:48.985859 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-13 00:56:48.985862 | orchestrator | Friday 13 March 2026 00:52:38 +0000 (0:00:00.275) 0:06:37.250 ********** 2026-03-13 00:56:48.985866 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.985870 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.985874 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.985877 | orchestrator | 2026-03-13 00:56:48.985881 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-13 00:56:48.985885 | orchestrator | Friday 13 March 2026 00:52:39 +0000 (0:00:00.860) 0:06:38.111 ********** 2026-03-13 00:56:48.985889 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.985892 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.985896 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.985900 | orchestrator | 2026-03-13 00:56:48.985903 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-13 00:56:48.985907 | orchestrator | Friday 13 March 2026 00:52:39 +0000 (0:00:00.663) 0:06:38.774 ********** 2026-03-13 00:56:48.985912 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.985918 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.985924 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.985930 | orchestrator | 2026-03-13 00:56:48.985936 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-13 00:56:48.985942 | orchestrator | Friday 13 March 2026 00:52:40 +0000 (0:00:00.330) 0:06:39.104 ********** 2026-03-13 00:56:48.985948 | orchestrator | skipping: 
[testbed-node-3] 2026-03-13 00:56:48.985953 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.985959 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.985965 | orchestrator | 2026-03-13 00:56:48.985970 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-13 00:56:48.985977 | orchestrator | Friday 13 March 2026 00:52:40 +0000 (0:00:00.293) 0:06:39.398 ********** 2026-03-13 00:56:48.985983 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.985989 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.985994 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.986001 | orchestrator | 2026-03-13 00:56:48.986007 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-13 00:56:48.986040 | orchestrator | Friday 13 March 2026 00:52:40 +0000 (0:00:00.469) 0:06:39.867 ********** 2026-03-13 00:56:48.986048 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.986054 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.986060 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.986067 | orchestrator | 2026-03-13 00:56:48.986073 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-13 00:56:48.986085 | orchestrator | Friday 13 March 2026 00:52:41 +0000 (0:00:00.288) 0:06:40.156 ********** 2026-03-13 00:56:48.986092 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.986099 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.986105 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.986112 | orchestrator | 2026-03-13 00:56:48.986118 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-13 00:56:48.986124 | orchestrator | Friday 13 March 2026 00:52:41 +0000 (0:00:00.332) 0:06:40.489 ********** 2026-03-13 00:56:48.986130 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.986136 | 
orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.986142 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.986148 | orchestrator | 2026-03-13 00:56:48.986154 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-13 00:56:48.986160 | orchestrator | Friday 13 March 2026 00:52:41 +0000 (0:00:00.267) 0:06:40.756 ********** 2026-03-13 00:56:48.986171 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.986177 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.986183 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.986189 | orchestrator | 2026-03-13 00:56:48.986194 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-13 00:56:48.986200 | orchestrator | Friday 13 March 2026 00:52:42 +0000 (0:00:00.418) 0:06:41.174 ********** 2026-03-13 00:56:48.986206 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.986212 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.986218 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.986224 | orchestrator | 2026-03-13 00:56:48.986229 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-13 00:56:48.986235 | orchestrator | Friday 13 March 2026 00:52:42 +0000 (0:00:00.261) 0:06:41.436 ********** 2026-03-13 00:56:48.986241 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.986246 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.986252 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.986258 | orchestrator | 2026-03-13 00:56:48.986264 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-13 00:56:48.986275 | orchestrator | Friday 13 March 2026 00:52:42 +0000 (0:00:00.302) 0:06:41.738 ********** 2026-03-13 00:56:48.986282 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.986287 | orchestrator | ok: 
[testbed-node-4] 2026-03-13 00:56:48.986293 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.986299 | orchestrator | 2026-03-13 00:56:48.986305 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-13 00:56:48.986311 | orchestrator | Friday 13 March 2026 00:52:43 +0000 (0:00:00.614) 0:06:42.353 ********** 2026-03-13 00:56:48.986316 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.986322 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.986327 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.986333 | orchestrator | 2026-03-13 00:56:48.986339 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-13 00:56:48.986344 | orchestrator | Friday 13 March 2026 00:52:43 +0000 (0:00:00.290) 0:06:42.643 ********** 2026-03-13 00:56:48.986351 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-13 00:56:48.986357 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-13 00:56:48.986363 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-13 00:56:48.986369 | orchestrator | 2026-03-13 00:56:48.986375 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-13 00:56:48.986381 | orchestrator | Friday 13 March 2026 00:52:44 +0000 (0:00:00.543) 0:06:43.187 ********** 2026-03-13 00:56:48.986387 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-13 00:56:48.986393 | orchestrator | 2026-03-13 00:56:48.986399 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-13 00:56:48.986405 | orchestrator | Friday 13 March 2026 00:52:44 +0000 (0:00:00.469) 0:06:43.657 ********** 2026-03-13 00:56:48.986411 | orchestrator | skipping: 
[testbed-node-3] 2026-03-13 00:56:48.986417 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.986423 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.986428 | orchestrator | 2026-03-13 00:56:48.986434 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-13 00:56:48.986440 | orchestrator | Friday 13 March 2026 00:52:45 +0000 (0:00:00.441) 0:06:44.098 ********** 2026-03-13 00:56:48.986446 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.986452 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.986458 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.986464 | orchestrator | 2026-03-13 00:56:48.986470 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-13 00:56:48.986476 | orchestrator | Friday 13 March 2026 00:52:45 +0000 (0:00:00.305) 0:06:44.403 ********** 2026-03-13 00:56:48.986487 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.986493 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.986499 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.986505 | orchestrator | 2026-03-13 00:56:48.986511 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-13 00:56:48.986516 | orchestrator | Friday 13 March 2026 00:52:46 +0000 (0:00:00.682) 0:06:45.086 ********** 2026-03-13 00:56:48.986522 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.986527 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.986533 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.986539 | orchestrator | 2026-03-13 00:56:48.986545 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-13 00:56:48.986551 | orchestrator | Friday 13 March 2026 00:52:46 +0000 (0:00:00.343) 0:06:45.429 ********** 2026-03-13 00:56:48.986557 | orchestrator | changed: [testbed-node-4] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-13 00:56:48.986563 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-13 00:56:48.986570 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-13 00:56:48.986585 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-13 00:56:48.986592 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-13 00:56:48.986599 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-13 00:56:48.986605 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-13 00:56:48.986612 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-13 00:56:48.986618 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-13 00:56:48.986625 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-13 00:56:48.986645 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-13 00:56:48.986652 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-13 00:56:48.986659 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-13 00:56:48.986665 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-13 00:56:48.986671 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-13 00:56:48.986678 | orchestrator | 2026-03-13 00:56:48.986684 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2026-03-13 00:56:48.986690 | orchestrator | Friday 13 March 2026 00:52:49 +0000 (0:00:03.218) 0:06:48.648 ********** 2026-03-13 00:56:48.986696 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.986702 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.986708 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.986715 | orchestrator | 2026-03-13 00:56:48.986721 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-13 00:56:48.986732 | orchestrator | Friday 13 March 2026 00:52:49 +0000 (0:00:00.284) 0:06:48.932 ********** 2026-03-13 00:56:48.986737 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-13 00:56:48.986741 | orchestrator | 2026-03-13 00:56:48.986745 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-13 00:56:48.986748 | orchestrator | Friday 13 March 2026 00:52:50 +0000 (0:00:00.472) 0:06:49.404 ********** 2026-03-13 00:56:48.986752 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-13 00:56:48.986756 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-13 00:56:48.986760 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-13 00:56:48.986768 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-03-13 00:56:48.986772 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-03-13 00:56:48.986775 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-03-13 00:56:48.986779 | orchestrator | 2026-03-13 00:56:48.986783 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-13 00:56:48.986787 | orchestrator | Friday 13 March 2026 00:52:51 +0000 (0:00:01.158) 0:06:50.563 ********** 2026-03-13 00:56:48.986790 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-03-13 00:56:48.986794 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-13 00:56:48.986798 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-13 00:56:48.986801 | orchestrator | 2026-03-13 00:56:48.986805 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-13 00:56:48.986809 | orchestrator | Friday 13 March 2026 00:52:53 +0000 (0:00:01.841) 0:06:52.404 ********** 2026-03-13 00:56:48.986813 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-13 00:56:48.986816 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-13 00:56:48.986820 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:56:48.986824 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-13 00:56:48.986827 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-13 00:56:48.986831 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:56:48.986835 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-13 00:56:48.986838 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-13 00:56:48.986842 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:56:48.986846 | orchestrator | 2026-03-13 00:56:48.986849 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-13 00:56:48.986853 | orchestrator | Friday 13 March 2026 00:52:54 +0000 (0:00:01.034) 0:06:53.439 ********** 2026-03-13 00:56:48.986857 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-13 00:56:48.986861 | orchestrator | 2026-03-13 00:56:48.986864 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-13 00:56:48.986868 | orchestrator | Friday 13 March 2026 00:52:57 +0000 (0:00:02.722) 0:06:56.161 ********** 2026-03-13 00:56:48.986872 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-13 00:56:48.986875 | orchestrator | 2026-03-13 00:56:48.986879 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-03-13 00:56:48.986883 | orchestrator | Friday 13 March 2026 00:52:57 +0000 (0:00:00.759) 0:06:56.921 ********** 2026-03-13 00:56:48.986887 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-119e494c-61db-56d2-84c4-ae65d8356f6a', 'data_vg': 'ceph-119e494c-61db-56d2-84c4-ae65d8356f6a'}) 2026-03-13 00:56:48.986891 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-49707cb0-36ac-571b-bf56-7288c46886ca', 'data_vg': 'ceph-49707cb0-36ac-571b-bf56-7288c46886ca'}) 2026-03-13 00:56:48.986900 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b5494c86-4b11-53e5-88ab-5da9d8a68a1e', 'data_vg': 'ceph-b5494c86-4b11-53e5-88ab-5da9d8a68a1e'}) 2026-03-13 00:56:48.986904 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-798cee0b-732e-51b2-a8a3-29d8c2932297', 'data_vg': 'ceph-798cee0b-732e-51b2-a8a3-29d8c2932297'}) 2026-03-13 00:56:48.986908 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5854fe4a-6d96-56a2-8017-73d7ac8736b8', 'data_vg': 'ceph-5854fe4a-6d96-56a2-8017-73d7ac8736b8'}) 2026-03-13 00:56:48.986911 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b7299377-1bbd-5436-9d58-2dd820a08a2f', 'data_vg': 'ceph-b7299377-1bbd-5436-9d58-2dd820a08a2f'}) 2026-03-13 00:56:48.986915 | orchestrator | 2026-03-13 00:56:48.986919 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-13 00:56:48.986925 | orchestrator | Friday 13 March 2026 00:53:38 +0000 (0:00:40.226) 0:07:37.147 ********** 2026-03-13 00:56:48.986929 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.986933 | orchestrator | skipping: [testbed-node-4] 2026-03-13 
00:56:48.986937 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.986940 | orchestrator | 2026-03-13 00:56:48.986944 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-13 00:56:48.986948 | orchestrator | Friday 13 March 2026 00:53:38 +0000 (0:00:00.289) 0:07:37.437 ********** 2026-03-13 00:56:48.986952 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-13 00:56:48.986955 | orchestrator | 2026-03-13 00:56:48.986959 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-13 00:56:48.986963 | orchestrator | Friday 13 March 2026 00:53:39 +0000 (0:00:00.617) 0:07:38.055 ********** 2026-03-13 00:56:48.986969 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.986973 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.986976 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.986980 | orchestrator | 2026-03-13 00:56:48.986984 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-03-13 00:56:48.986988 | orchestrator | Friday 13 March 2026 00:53:39 +0000 (0:00:00.645) 0:07:38.700 ********** 2026-03-13 00:56:48.986991 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.986995 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.986999 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.987003 | orchestrator | 2026-03-13 00:56:48.987006 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-13 00:56:48.987010 | orchestrator | Friday 13 March 2026 00:53:42 +0000 (0:00:02.859) 0:07:41.559 ********** 2026-03-13 00:56:48.987014 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-13 00:56:48.987018 | orchestrator | 2026-03-13 00:56:48.987021 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-03-13 00:56:48.987025 | orchestrator | Friday 13 March 2026 00:53:43 +0000 (0:00:00.709) 0:07:42.269 ********** 2026-03-13 00:56:48.987029 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:56:48.987032 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:56:48.987036 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:56:48.987040 | orchestrator | 2026-03-13 00:56:48.987044 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-13 00:56:48.987047 | orchestrator | Friday 13 March 2026 00:53:44 +0000 (0:00:01.227) 0:07:43.496 ********** 2026-03-13 00:56:48.987051 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:56:48.987055 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:56:48.987059 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:56:48.987062 | orchestrator | 2026-03-13 00:56:48.987066 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-13 00:56:48.987070 | orchestrator | Friday 13 March 2026 00:53:45 +0000 (0:00:01.053) 0:07:44.549 ********** 2026-03-13 00:56:48.987073 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:56:48.987077 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:56:48.987081 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:56:48.987085 | orchestrator | 2026-03-13 00:56:48.987088 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-13 00:56:48.987092 | orchestrator | Friday 13 March 2026 00:53:47 +0000 (0:00:01.718) 0:07:46.268 ********** 2026-03-13 00:56:48.987096 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.987099 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.987103 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.987107 | orchestrator | 2026-03-13 00:56:48.987111 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-03-13 00:56:48.987114 | orchestrator | Friday 13 March 2026 00:53:47 +0000 (0:00:00.431) 0:07:46.699 ********** 2026-03-13 00:56:48.987118 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.987127 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.987131 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.987134 | orchestrator | 2026-03-13 00:56:48.987138 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-13 00:56:48.987142 | orchestrator | Friday 13 March 2026 00:53:48 +0000 (0:00:00.290) 0:07:46.989 ********** 2026-03-13 00:56:48.987146 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-13 00:56:48.987149 | orchestrator | ok: [testbed-node-4] => (item=4) 2026-03-13 00:56:48.987153 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-03-13 00:56:48.987157 | orchestrator | ok: [testbed-node-3] => (item=5) 2026-03-13 00:56:48.987160 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-03-13 00:56:48.987164 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-03-13 00:56:48.987168 | orchestrator | 2026-03-13 00:56:48.987172 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-13 00:56:48.987175 | orchestrator | Friday 13 March 2026 00:53:49 +0000 (0:00:00.981) 0:07:47.970 ********** 2026-03-13 00:56:48.987179 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-13 00:56:48.987183 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-03-13 00:56:48.987189 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-03-13 00:56:48.987193 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-03-13 00:56:48.987197 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-03-13 00:56:48.987200 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-03-13 00:56:48.987204 | orchestrator | 2026-03-13 00:56:48.987208 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-03-13 00:56:48.987211 | orchestrator | Friday 13 March 2026 00:53:50 +0000 (0:00:01.921) 0:07:49.892 ********** 2026-03-13 00:56:48.987215 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-03-13 00:56:48.987219 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-03-13 00:56:48.987223 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-13 00:56:48.987226 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-03-13 00:56:48.987230 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-03-13 00:56:48.987234 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-03-13 00:56:48.987238 | orchestrator | 2026-03-13 00:56:48.987241 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-13 00:56:48.987245 | orchestrator | Friday 13 March 2026 00:53:54 +0000 (0:00:03.511) 0:07:53.403 ********** 2026-03-13 00:56:48.987249 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.987253 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.987256 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-13 00:56:48.987260 | orchestrator | 2026-03-13 00:56:48.987264 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-13 00:56:48.987267 | orchestrator | Friday 13 March 2026 00:53:57 +0000 (0:00:02.571) 0:07:55.974 ********** 2026-03-13 00:56:48.987271 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.987275 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.987279 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-03-13 00:56:48.987282 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-13 00:56:48.987286 | orchestrator | 2026-03-13 00:56:48.987292 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-13 00:56:48.987296 | orchestrator | Friday 13 March 2026 00:54:09 +0000 (0:00:12.284) 0:08:08.259 ********** 2026-03-13 00:56:48.987300 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.987303 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.987307 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.987311 | orchestrator | 2026-03-13 00:56:48.987315 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-13 00:56:48.987318 | orchestrator | Friday 13 March 2026 00:54:10 +0000 (0:00:01.074) 0:08:09.333 ********** 2026-03-13 00:56:48.987322 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.987329 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.987332 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.987347 | orchestrator | 2026-03-13 00:56:48.987351 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-13 00:56:48.987355 | orchestrator | Friday 13 March 2026 00:54:10 +0000 (0:00:00.285) 0:08:09.619 ********** 2026-03-13 00:56:48.987359 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-13 00:56:48.987363 | orchestrator | 2026-03-13 00:56:48.987367 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-13 00:56:48.987373 | orchestrator | Friday 13 March 2026 00:54:11 +0000 (0:00:00.631) 0:08:10.250 ********** 2026-03-13 00:56:48.987380 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-13 00:56:48.987386 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-03-13 00:56:48.987392 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-13 00:56:48.987398 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.987405 | orchestrator | 2026-03-13 00:56:48.987412 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-13 00:56:48.987418 | orchestrator | Friday 13 March 2026 00:54:11 +0000 (0:00:00.376) 0:08:10.626 ********** 2026-03-13 00:56:48.987425 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.987431 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.987438 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.987444 | orchestrator | 2026-03-13 00:56:48.987450 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-13 00:56:48.987457 | orchestrator | Friday 13 March 2026 00:54:11 +0000 (0:00:00.281) 0:08:10.908 ********** 2026-03-13 00:56:48.987461 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.987465 | orchestrator | 2026-03-13 00:56:48.987468 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-13 00:56:48.987472 | orchestrator | Friday 13 March 2026 00:54:12 +0000 (0:00:00.200) 0:08:11.109 ********** 2026-03-13 00:56:48.987476 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.987479 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.987483 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.987487 | orchestrator | 2026-03-13 00:56:48.987490 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-13 00:56:48.987494 | orchestrator | Friday 13 March 2026 00:54:12 +0000 (0:00:00.287) 0:08:11.397 ********** 2026-03-13 00:56:48.987498 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.987502 | orchestrator | 2026-03-13 00:56:48.987505 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-13 00:56:48.987509 | orchestrator | Friday 13 March 2026 00:54:12 +0000 (0:00:00.196) 0:08:11.594 ********** 2026-03-13 00:56:48.987513 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.987516 | orchestrator | 2026-03-13 00:56:48.987520 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-13 00:56:48.987524 | orchestrator | Friday 13 March 2026 00:54:12 +0000 (0:00:00.201) 0:08:11.795 ********** 2026-03-13 00:56:48.987527 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.987531 | orchestrator | 2026-03-13 00:56:48.987535 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-13 00:56:48.987539 | orchestrator | Friday 13 March 2026 00:54:12 +0000 (0:00:00.109) 0:08:11.904 ********** 2026-03-13 00:56:48.987545 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.987549 | orchestrator | 2026-03-13 00:56:48.987553 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-13 00:56:48.987556 | orchestrator | Friday 13 March 2026 00:54:13 +0000 (0:00:00.578) 0:08:12.482 ********** 2026-03-13 00:56:48.987560 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.987564 | orchestrator | 2026-03-13 00:56:48.987568 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-13 00:56:48.987574 | orchestrator | Friday 13 March 2026 00:54:13 +0000 (0:00:00.232) 0:08:12.715 ********** 2026-03-13 00:56:48.987578 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-13 00:56:48.987582 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-13 00:56:48.987586 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-13 00:56:48.987589 | orchestrator | skipping: [testbed-node-3] 2026-03-13 
00:56:48.987593 | orchestrator | 2026-03-13 00:56:48.987597 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-13 00:56:48.987601 | orchestrator | Friday 13 March 2026 00:54:14 +0000 (0:00:00.338) 0:08:13.054 ********** 2026-03-13 00:56:48.987604 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.987608 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.987612 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.987615 | orchestrator | 2026-03-13 00:56:48.987619 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-13 00:56:48.987623 | orchestrator | Friday 13 March 2026 00:54:14 +0000 (0:00:00.273) 0:08:13.327 ********** 2026-03-13 00:56:48.987627 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.987662 | orchestrator | 2026-03-13 00:56:48.987666 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-13 00:56:48.987670 | orchestrator | Friday 13 March 2026 00:54:14 +0000 (0:00:00.201) 0:08:13.529 ********** 2026-03-13 00:56:48.987674 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.987678 | orchestrator | 2026-03-13 00:56:48.987684 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-03-13 00:56:48.987688 | orchestrator | 2026-03-13 00:56:48.987691 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-13 00:56:48.987695 | orchestrator | Friday 13 March 2026 00:54:15 +0000 (0:00:00.747) 0:08:14.276 ********** 2026-03-13 00:56:48.987699 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:56:48.987704 | orchestrator | 2026-03-13 00:56:48.987708 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2026-03-13 00:56:48.987712 | orchestrator | Friday 13 March 2026 00:54:16 +0000 (0:00:00.981) 0:08:15.257 ********** 2026-03-13 00:56:48.987716 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:56:48.987720 | orchestrator | 2026-03-13 00:56:48.987723 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-13 00:56:48.987727 | orchestrator | Friday 13 March 2026 00:54:17 +0000 (0:00:01.091) 0:08:16.349 ********** 2026-03-13 00:56:48.987731 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.987734 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.987738 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.987742 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.987746 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.987749 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.987753 | orchestrator | 2026-03-13 00:56:48.987757 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-13 00:56:48.987760 | orchestrator | Friday 13 March 2026 00:54:18 +0000 (0:00:00.962) 0:08:17.312 ********** 2026-03-13 00:56:48.987764 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.987768 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.987772 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.987775 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.987779 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.987783 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.987787 | orchestrator | 2026-03-13 00:56:48.987790 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-13 00:56:48.987794 | orchestrator | Friday 13 
March 2026 00:54:19 +0000 (0:00:00.674) 0:08:17.986 ********** 2026-03-13 00:56:48.987801 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.987805 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.987808 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.987812 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.987816 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.987819 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.987823 | orchestrator | 2026-03-13 00:56:48.987827 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-13 00:56:48.987830 | orchestrator | Friday 13 March 2026 00:54:19 +0000 (0:00:00.831) 0:08:18.817 ********** 2026-03-13 00:56:48.987834 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.987838 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.987842 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.987845 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.987849 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.987853 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.987856 | orchestrator | 2026-03-13 00:56:48.987860 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-13 00:56:48.987864 | orchestrator | Friday 13 March 2026 00:54:20 +0000 (0:00:00.756) 0:08:19.573 ********** 2026-03-13 00:56:48.987867 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.987871 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.987875 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.987879 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.987882 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.987886 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.987890 | orchestrator | 2026-03-13 00:56:48.987894 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2026-03-13 00:56:48.987900 | orchestrator | Friday 13 March 2026 00:54:21 +0000 (0:00:01.087) 0:08:20.660 ********** 2026-03-13 00:56:48.987904 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.987908 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.987911 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.987915 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.987919 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.987922 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.987926 | orchestrator | 2026-03-13 00:56:48.987930 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-13 00:56:48.987934 | orchestrator | Friday 13 March 2026 00:54:22 +0000 (0:00:00.505) 0:08:21.166 ********** 2026-03-13 00:56:48.987937 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.987941 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.987945 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.987948 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.987952 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.987956 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.987960 | orchestrator | 2026-03-13 00:56:48.987963 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-13 00:56:48.987967 | orchestrator | Friday 13 March 2026 00:54:22 +0000 (0:00:00.692) 0:08:21.859 ********** 2026-03-13 00:56:48.987971 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.987975 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.987978 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.987982 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.987986 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.987989 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.987993 | 
orchestrator | 2026-03-13 00:56:48.987997 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-13 00:56:48.988001 | orchestrator | Friday 13 March 2026 00:54:23 +0000 (0:00:00.958) 0:08:22.818 ********** 2026-03-13 00:56:48.988004 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.988008 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.988012 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.988015 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.988022 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.988025 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.988029 | orchestrator | 2026-03-13 00:56:48.988035 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-13 00:56:48.988039 | orchestrator | Friday 13 March 2026 00:54:24 +0000 (0:00:01.124) 0:08:23.942 ********** 2026-03-13 00:56:48.988042 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.988046 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.988050 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.988054 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.988057 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.988061 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.988065 | orchestrator | 2026-03-13 00:56:48.988069 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-13 00:56:48.988072 | orchestrator | Friday 13 March 2026 00:54:25 +0000 (0:00:00.495) 0:08:24.437 ********** 2026-03-13 00:56:48.988076 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.988080 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.988083 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.988087 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.988091 | orchestrator | ok: [testbed-node-1] 2026-03-13 
00:56:48.988095 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.988098 | orchestrator | 2026-03-13 00:56:48.988102 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-13 00:56:48.988106 | orchestrator | Friday 13 March 2026 00:54:26 +0000 (0:00:00.665) 0:08:25.102 ********** 2026-03-13 00:56:48.988110 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.988113 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.988117 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.988121 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.988124 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.988128 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.988132 | orchestrator | 2026-03-13 00:56:48.988136 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-13 00:56:48.988139 | orchestrator | Friday 13 March 2026 00:54:26 +0000 (0:00:00.537) 0:08:25.640 ********** 2026-03-13 00:56:48.988143 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.988147 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.988150 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.988154 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.988158 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.988162 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.988165 | orchestrator | 2026-03-13 00:56:48.988169 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-13 00:56:48.988173 | orchestrator | Friday 13 March 2026 00:54:27 +0000 (0:00:00.740) 0:08:26.380 ********** 2026-03-13 00:56:48.988177 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.988180 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.988184 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.988188 | orchestrator | skipping: [testbed-node-0] 
2026-03-13 00:56:48.988191 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.988195 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.988199 | orchestrator | 2026-03-13 00:56:48.988203 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-13 00:56:48.988206 | orchestrator | Friday 13 March 2026 00:54:27 +0000 (0:00:00.485) 0:08:26.866 ********** 2026-03-13 00:56:48.988210 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.988214 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.988218 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.988224 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.988230 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.988236 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.988245 | orchestrator | 2026-03-13 00:56:48.988253 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-13 00:56:48.988265 | orchestrator | Friday 13 March 2026 00:54:28 +0000 (0:00:00.761) 0:08:27.627 ********** 2026-03-13 00:56:48.988271 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.988277 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.988282 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.988287 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:48.988293 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:48.988299 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:48.988305 | orchestrator | 2026-03-13 00:56:48.988312 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-13 00:56:48.988322 | orchestrator | Friday 13 March 2026 00:54:29 +0000 (0:00:00.566) 0:08:28.194 ********** 2026-03-13 00:56:48.988328 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.988335 | orchestrator | skipping: [testbed-node-4] 
2026-03-13 00:56:48.988341 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.988347 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.988354 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.988359 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.988363 | orchestrator | 2026-03-13 00:56:48.988368 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-13 00:56:48.988374 | orchestrator | Friday 13 March 2026 00:54:29 +0000 (0:00:00.678) 0:08:28.872 ********** 2026-03-13 00:56:48.988384 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.988391 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.988397 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.988402 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.988408 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.988414 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.988420 | orchestrator | 2026-03-13 00:56:48.988427 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-13 00:56:48.988433 | orchestrator | Friday 13 March 2026 00:54:30 +0000 (0:00:00.668) 0:08:29.540 ********** 2026-03-13 00:56:48.988439 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.988445 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.988450 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.988454 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.988458 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.988462 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.988469 | orchestrator | 2026-03-13 00:56:48.988475 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-03-13 00:56:48.988481 | orchestrator | Friday 13 March 2026 00:54:31 +0000 (0:00:01.314) 0:08:30.855 ********** 2026-03-13 00:56:48.988488 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-03-13 00:56:48.988494 | orchestrator | 2026-03-13 00:56:48.988500 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-03-13 00:56:48.988510 | orchestrator | Friday 13 March 2026 00:54:35 +0000 (0:00:03.843) 0:08:34.699 ********** 2026-03-13 00:56:48.988516 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-13 00:56:48.988523 | orchestrator | 2026-03-13 00:56:48.988528 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-03-13 00:56:48.988531 | orchestrator | Friday 13 March 2026 00:54:37 +0000 (0:00:02.106) 0:08:36.806 ********** 2026-03-13 00:56:48.988535 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:56:48.988539 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:56:48.988542 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.988546 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:56:48.988550 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:56:48.988553 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:56:48.988557 | orchestrator | 2026-03-13 00:56:48.988561 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-03-13 00:56:48.988565 | orchestrator | Friday 13 March 2026 00:54:39 +0000 (0:00:01.884) 0:08:38.690 ********** 2026-03-13 00:56:48.988568 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:56:48.988572 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:56:48.988579 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:56:48.988583 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:56:48.988586 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:56:48.988590 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:56:48.988594 | orchestrator | 2026-03-13 00:56:48.988598 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-03-13 00:56:48.988601 | orchestrator | Friday 13 March 2026 00:54:41 +0000 (0:00:01.287) 0:08:39.977 ********** 2026-03-13 00:56:48.988605 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:56:48.988610 | orchestrator | 2026-03-13 00:56:48.988614 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-03-13 00:56:48.988617 | orchestrator | Friday 13 March 2026 00:54:42 +0000 (0:00:01.203) 0:08:41.181 ********** 2026-03-13 00:56:48.988621 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:56:48.988625 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:56:48.988629 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:56:48.988649 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:56:48.988652 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:56:48.988656 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:56:48.988660 | orchestrator | 2026-03-13 00:56:48.988664 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-03-13 00:56:48.988667 | orchestrator | Friday 13 March 2026 00:54:43 +0000 (0:00:01.758) 0:08:42.939 ********** 2026-03-13 00:56:48.988671 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:56:48.988675 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:56:48.988679 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:56:48.988682 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:56:48.988689 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:56:48.988695 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:56:48.988703 | orchestrator | 2026-03-13 00:56:48.988712 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-03-13 00:56:48.988718 | orchestrator | Friday 13 March 2026 00:54:47 +0000 (0:00:03.364) 
0:08:46.304 ********** 2026-03-13 00:56:48.988724 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:56:48.988730 | orchestrator | 2026-03-13 00:56:48.988736 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-03-13 00:56:48.988742 | orchestrator | Friday 13 March 2026 00:54:48 +0000 (0:00:01.191) 0:08:47.496 ********** 2026-03-13 00:56:48.988748 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.988755 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.988761 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.988768 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.988775 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.988781 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.988788 | orchestrator | 2026-03-13 00:56:48.988794 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-03-13 00:56:48.988805 | orchestrator | Friday 13 March 2026 00:54:49 +0000 (0:00:00.849) 0:08:48.345 ********** 2026-03-13 00:56:48.988811 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:56:48.988817 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:56:48.988823 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:56:48.988829 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:56:48.988835 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:56:48.988841 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:56:48.988847 | orchestrator | 2026-03-13 00:56:48.988852 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-03-13 00:56:48.988858 | orchestrator | Friday 13 March 2026 00:54:52 +0000 (0:00:03.135) 0:08:51.480 ********** 2026-03-13 00:56:48.988864 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.988870 | 
orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.988919 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.988923 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:48.988927 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:48.988931 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:48.988934 | orchestrator | 2026-03-13 00:56:48.988938 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-03-13 00:56:48.988942 | orchestrator | 2026-03-13 00:56:48.988946 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-13 00:56:48.988949 | orchestrator | Friday 13 March 2026 00:54:53 +0000 (0:00:00.901) 0:08:52.381 ********** 2026-03-13 00:56:48.988954 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-13 00:56:48.988957 | orchestrator | 2026-03-13 00:56:48.988961 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-13 00:56:48.988965 | orchestrator | Friday 13 March 2026 00:54:53 +0000 (0:00:00.426) 0:08:52.807 ********** 2026-03-13 00:56:48.988969 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-13 00:56:48.988973 | orchestrator | 2026-03-13 00:56:48.988979 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-13 00:56:48.988983 | orchestrator | Friday 13 March 2026 00:54:54 +0000 (0:00:00.745) 0:08:53.553 ********** 2026-03-13 00:56:48.988987 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.988990 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.988994 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.988998 | orchestrator | 2026-03-13 00:56:48.989002 | orchestrator | TASK [ceph-handler : Check for an osd 
container] ******************************* 2026-03-13 00:56:48.989005 | orchestrator | Friday 13 March 2026 00:54:54 +0000 (0:00:00.311) 0:08:53.865 ********** 2026-03-13 00:56:48.989009 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.989013 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.989016 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.989020 | orchestrator | 2026-03-13 00:56:48.989024 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-13 00:56:48.989028 | orchestrator | Friday 13 March 2026 00:54:55 +0000 (0:00:00.639) 0:08:54.504 ********** 2026-03-13 00:56:48.989031 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.989035 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.989039 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.989042 | orchestrator | 2026-03-13 00:56:48.989046 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-13 00:56:48.989050 | orchestrator | Friday 13 March 2026 00:54:56 +0000 (0:00:00.917) 0:08:55.422 ********** 2026-03-13 00:56:48.989053 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.989057 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.989061 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.989065 | orchestrator | 2026-03-13 00:56:48.989068 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-13 00:56:48.989072 | orchestrator | Friday 13 March 2026 00:54:57 +0000 (0:00:00.712) 0:08:56.135 ********** 2026-03-13 00:56:48.989076 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.989080 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.989083 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.989087 | orchestrator | 2026-03-13 00:56:48.989091 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-13 
00:56:48.989095 | orchestrator | Friday 13 March 2026 00:54:57 +0000 (0:00:00.322) 0:08:56.457 ********** 2026-03-13 00:56:48.989098 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.989102 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.989106 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.989109 | orchestrator | 2026-03-13 00:56:48.989113 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-13 00:56:48.989117 | orchestrator | Friday 13 March 2026 00:54:57 +0000 (0:00:00.267) 0:08:56.725 ********** 2026-03-13 00:56:48.989124 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.989128 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.989132 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.989135 | orchestrator | 2026-03-13 00:56:48.989139 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-13 00:56:48.989143 | orchestrator | Friday 13 March 2026 00:54:58 +0000 (0:00:00.460) 0:08:57.185 ********** 2026-03-13 00:56:48.989146 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.989150 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.989154 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.989157 | orchestrator | 2026-03-13 00:56:48.989161 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-13 00:56:48.989165 | orchestrator | Friday 13 March 2026 00:54:58 +0000 (0:00:00.714) 0:08:57.899 ********** 2026-03-13 00:56:48.989169 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.989172 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.989176 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.989180 | orchestrator | 2026-03-13 00:56:48.989183 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-13 00:56:48.989187 | orchestrator | Friday 
13 March 2026 00:54:59 +0000 (0:00:00.631) 0:08:58.531 ********** 2026-03-13 00:56:48.989191 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.989195 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.989199 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.989202 | orchestrator | 2026-03-13 00:56:48.989206 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-13 00:56:48.989213 | orchestrator | Friday 13 March 2026 00:54:59 +0000 (0:00:00.298) 0:08:58.829 ********** 2026-03-13 00:56:48.989217 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.989220 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.989224 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.989228 | orchestrator | 2026-03-13 00:56:48.989232 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-13 00:56:48.989235 | orchestrator | Friday 13 March 2026 00:55:00 +0000 (0:00:00.594) 0:08:59.424 ********** 2026-03-13 00:56:48.989239 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.989243 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.989246 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.989250 | orchestrator | 2026-03-13 00:56:48.989254 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-13 00:56:48.989257 | orchestrator | Friday 13 March 2026 00:55:00 +0000 (0:00:00.373) 0:08:59.797 ********** 2026-03-13 00:56:48.989261 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.989265 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.989269 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.989272 | orchestrator | 2026-03-13 00:56:48.989276 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-13 00:56:48.989280 | orchestrator | Friday 13 March 2026 00:55:01 +0000 
(0:00:00.354) 0:09:00.152 ********** 2026-03-13 00:56:48.989283 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.989287 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.989291 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.989294 | orchestrator | 2026-03-13 00:56:48.989298 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-13 00:56:48.989302 | orchestrator | Friday 13 March 2026 00:55:01 +0000 (0:00:00.324) 0:09:00.476 ********** 2026-03-13 00:56:48.989306 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.989309 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.989313 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.989317 | orchestrator | 2026-03-13 00:56:48.989320 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-13 00:56:48.989333 | orchestrator | Friday 13 March 2026 00:55:02 +0000 (0:00:00.575) 0:09:01.052 ********** 2026-03-13 00:56:48.989337 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.989341 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.989350 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.989354 | orchestrator | 2026-03-13 00:56:48.989358 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-13 00:56:48.989362 | orchestrator | Friday 13 March 2026 00:55:02 +0000 (0:00:00.321) 0:09:01.373 ********** 2026-03-13 00:56:48.989365 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.989371 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.989380 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.989387 | orchestrator | 2026-03-13 00:56:48.989393 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-13 00:56:48.989400 | orchestrator | Friday 13 March 2026 00:55:02 +0000 (0:00:00.308) 
0:09:01.682 ********** 2026-03-13 00:56:48.989406 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.989412 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.989418 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.989425 | orchestrator | 2026-03-13 00:56:48.989431 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-13 00:56:48.989437 | orchestrator | Friday 13 March 2026 00:55:03 +0000 (0:00:00.366) 0:09:02.048 ********** 2026-03-13 00:56:48.989443 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.989449 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.989453 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.989457 | orchestrator | 2026-03-13 00:56:48.989461 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-03-13 00:56:48.989464 | orchestrator | Friday 13 March 2026 00:55:03 +0000 (0:00:00.782) 0:09:02.831 ********** 2026-03-13 00:56:48.989468 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.989472 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.989475 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-03-13 00:56:48.989479 | orchestrator | 2026-03-13 00:56:48.989483 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-03-13 00:56:48.989486 | orchestrator | Friday 13 March 2026 00:55:04 +0000 (0:00:00.376) 0:09:03.208 ********** 2026-03-13 00:56:48.989490 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-13 00:56:48.989494 | orchestrator | 2026-03-13 00:56:48.989497 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-03-13 00:56:48.989501 | orchestrator | Friday 13 March 2026 00:55:06 +0000 (0:00:02.483) 0:09:05.691 ********** 2026-03-13 00:56:48.989507 | orchestrator | skipping: [testbed-node-3] 
=> (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-03-13 00:56:48.989512 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.989516 | orchestrator | 2026-03-13 00:56:48.989519 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-03-13 00:56:48.989523 | orchestrator | Friday 13 March 2026 00:55:07 +0000 (0:00:00.269) 0:09:05.960 ********** 2026-03-13 00:56:48.989528 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-13 00:56:48.989536 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-13 00:56:48.989540 | orchestrator | 2026-03-13 00:56:48.989547 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-03-13 00:56:48.989551 | orchestrator | Friday 13 March 2026 00:55:14 +0000 (0:00:07.416) 0:09:13.377 ********** 2026-03-13 00:56:48.989555 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-13 00:56:48.989558 | orchestrator | 2026-03-13 00:56:48.989562 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-03-13 00:56:48.989569 | orchestrator | Friday 13 March 2026 00:55:18 +0000 (0:00:04.053) 0:09:17.431 ********** 2026-03-13 00:56:48.989572 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-03-13 00:56:48.989576 | orchestrator | 2026-03-13 00:56:48.989580 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-03-13 00:56:48.989584 | orchestrator | Friday 13 March 2026 00:55:19 +0000 (0:00:00.521) 0:09:17.952 ********** 2026-03-13 00:56:48.989587 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-13 00:56:48.989591 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-13 00:56:48.989595 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-13 00:56:48.989598 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-03-13 00:56:48.989602 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-03-13 00:56:48.989606 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-03-13 00:56:48.989609 | orchestrator | 2026-03-13 00:56:48.989613 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-03-13 00:56:48.989617 | orchestrator | Friday 13 March 2026 00:55:19 +0000 (0:00:00.977) 0:09:18.930 ********** 2026-03-13 00:56:48.989620 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-13 00:56:48.989624 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-13 00:56:48.989628 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-13 00:56:48.989647 | orchestrator | 2026-03-13 00:56:48.989652 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-03-13 00:56:48.989656 | orchestrator | Friday 13 March 2026 00:55:22 +0000 (0:00:02.342) 0:09:21.272 ********** 2026-03-13 00:56:48.989660 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-13 00:56:48.989664 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2026-03-13 00:56:48.989668 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:56:48.989671 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-13 00:56:48.989675 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-13 00:56:48.989679 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:56:48.989683 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-13 00:56:48.989686 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-13 00:56:48.989690 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:56:48.989694 | orchestrator | 2026-03-13 00:56:48.989697 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-03-13 00:56:48.989701 | orchestrator | Friday 13 March 2026 00:55:23 +0000 (0:00:01.639) 0:09:22.911 ********** 2026-03-13 00:56:48.989705 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:56:48.989709 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:56:48.989712 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:56:48.989716 | orchestrator | 2026-03-13 00:56:48.989719 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-03-13 00:56:48.989723 | orchestrator | Friday 13 March 2026 00:55:26 +0000 (0:00:02.833) 0:09:25.745 ********** 2026-03-13 00:56:48.989727 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.989730 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.989734 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.989738 | orchestrator | 2026-03-13 00:56:48.989741 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-03-13 00:56:48.989745 | orchestrator | Friday 13 March 2026 00:55:27 +0000 (0:00:00.331) 0:09:26.077 ********** 2026-03-13 00:56:48.989749 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-03-13 00:56:48.989753 | orchestrator | 2026-03-13 00:56:48.989756 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-03-13 00:56:48.989762 | orchestrator | Friday 13 March 2026 00:55:27 +0000 (0:00:00.770) 0:09:26.847 ********** 2026-03-13 00:56:48.989785 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-13 00:56:48.989790 | orchestrator | 2026-03-13 00:56:48.989793 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-03-13 00:56:48.989797 | orchestrator | Friday 13 March 2026 00:55:28 +0000 (0:00:00.591) 0:09:27.439 ********** 2026-03-13 00:56:48.989801 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:56:48.989804 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:56:48.989808 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:56:48.989812 | orchestrator | 2026-03-13 00:56:48.989815 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-03-13 00:56:48.989819 | orchestrator | Friday 13 March 2026 00:55:29 +0000 (0:00:01.126) 0:09:28.565 ********** 2026-03-13 00:56:48.989823 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:56:48.989826 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:56:48.989830 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:56:48.989833 | orchestrator | 2026-03-13 00:56:48.989837 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-03-13 00:56:48.989841 | orchestrator | Friday 13 March 2026 00:55:30 +0000 (0:00:01.285) 0:09:29.851 ********** 2026-03-13 00:56:48.989844 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:56:48.989848 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:56:48.989852 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:56:48.989855 | orchestrator | 2026-03-13 
00:56:48.989859 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-03-13 00:56:48.989866 | orchestrator | Friday 13 March 2026 00:55:32 +0000 (0:00:01.894) 0:09:31.746 ********** 2026-03-13 00:56:48.989869 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:56:48.989873 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:56:48.989877 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:56:48.989880 | orchestrator | 2026-03-13 00:56:48.989884 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-03-13 00:56:48.989889 | orchestrator | Friday 13 March 2026 00:55:35 +0000 (0:00:02.272) 0:09:34.018 ********** 2026-03-13 00:56:48.989895 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.989901 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.989904 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.989908 | orchestrator | 2026-03-13 00:56:48.989912 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-13 00:56:48.989916 | orchestrator | Friday 13 March 2026 00:55:36 +0000 (0:00:01.392) 0:09:35.411 ********** 2026-03-13 00:56:48.989919 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:56:48.989923 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:56:48.989927 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:56:48.989930 | orchestrator | 2026-03-13 00:56:48.989934 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-13 00:56:48.989938 | orchestrator | Friday 13 March 2026 00:55:37 +0000 (0:00:00.605) 0:09:36.017 ********** 2026-03-13 00:56:48.989941 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-13 00:56:48.989945 | orchestrator | 2026-03-13 00:56:48.989949 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-03-13 00:56:48.989952 | orchestrator | Friday 13 March 2026 00:55:37 +0000 (0:00:00.770) 0:09:36.787 ********** 2026-03-13 00:56:48.989963 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.989967 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.989971 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.989974 | orchestrator | 2026-03-13 00:56:48.989978 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-13 00:56:48.989984 | orchestrator | Friday 13 March 2026 00:55:38 +0000 (0:00:00.369) 0:09:37.157 ********** 2026-03-13 00:56:48.989988 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:56:48.989994 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:56:48.989998 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:56:48.990001 | orchestrator | 2026-03-13 00:56:48.990005 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-13 00:56:48.990009 | orchestrator | Friday 13 March 2026 00:55:39 +0000 (0:00:01.143) 0:09:38.300 ********** 2026-03-13 00:56:48.990036 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-13 00:56:48.990041 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-13 00:56:48.990045 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-13 00:56:48.990049 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.990052 | orchestrator | 2026-03-13 00:56:48.990056 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-13 00:56:48.990060 | orchestrator | Friday 13 March 2026 00:55:40 +0000 (0:00:00.723) 0:09:39.024 ********** 2026-03-13 00:56:48.990064 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.990067 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.990071 | orchestrator | ok: [testbed-node-5] 2026-03-13 
00:56:48.990075 | orchestrator | 2026-03-13 00:56:48.990079 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-13 00:56:48.990082 | orchestrator | 2026-03-13 00:56:48.990086 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-13 00:56:48.990090 | orchestrator | Friday 13 March 2026 00:55:40 +0000 (0:00:00.705) 0:09:39.730 ********** 2026-03-13 00:56:48.990093 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-13 00:56:48.990097 | orchestrator | 2026-03-13 00:56:48.990101 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-13 00:56:48.990105 | orchestrator | Friday 13 March 2026 00:55:41 +0000 (0:00:00.487) 0:09:40.217 ********** 2026-03-13 00:56:48.990108 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-13 00:56:48.990112 | orchestrator | 2026-03-13 00:56:48.990116 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-13 00:56:48.990120 | orchestrator | Friday 13 March 2026 00:55:42 +0000 (0:00:01.066) 0:09:41.283 ********** 2026-03-13 00:56:48.990123 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.990127 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.990131 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.990134 | orchestrator | 2026-03-13 00:56:48.990138 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-13 00:56:48.990142 | orchestrator | Friday 13 March 2026 00:55:42 +0000 (0:00:00.569) 0:09:41.852 ********** 2026-03-13 00:56:48.990145 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.990149 | orchestrator | ok: [testbed-node-4] 2026-03-13 
00:56:48.990153 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.990157 | orchestrator | 2026-03-13 00:56:48.990160 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-13 00:56:48.990164 | orchestrator | Friday 13 March 2026 00:55:43 +0000 (0:00:00.743) 0:09:42.596 ********** 2026-03-13 00:56:48.990168 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.990172 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.990175 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.990179 | orchestrator | 2026-03-13 00:56:48.990183 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-13 00:56:48.990187 | orchestrator | Friday 13 March 2026 00:55:44 +0000 (0:00:00.780) 0:09:43.376 ********** 2026-03-13 00:56:48.990190 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.990194 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.990198 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.990201 | orchestrator | 2026-03-13 00:56:48.990205 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-13 00:56:48.990209 | orchestrator | Friday 13 March 2026 00:55:45 +0000 (0:00:00.650) 0:09:44.027 ********** 2026-03-13 00:56:48.990216 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.990222 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.990226 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.990230 | orchestrator | 2026-03-13 00:56:48.990234 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-13 00:56:48.990237 | orchestrator | Friday 13 March 2026 00:55:45 +0000 (0:00:00.266) 0:09:44.294 ********** 2026-03-13 00:56:48.990241 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.990245 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.990249 | orchestrator | skipping: 
[testbed-node-5] 2026-03-13 00:56:48.990252 | orchestrator | 2026-03-13 00:56:48.990256 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-13 00:56:48.990260 | orchestrator | Friday 13 March 2026 00:55:45 +0000 (0:00:00.267) 0:09:44.561 ********** 2026-03-13 00:56:48.990264 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.990267 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.990271 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.990275 | orchestrator | 2026-03-13 00:56:48.990278 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-13 00:56:48.990282 | orchestrator | Friday 13 March 2026 00:55:46 +0000 (0:00:00.419) 0:09:44.981 ********** 2026-03-13 00:56:48.990286 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.990290 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.990293 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.990297 | orchestrator | 2026-03-13 00:56:48.990301 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-13 00:56:48.990305 | orchestrator | Friday 13 March 2026 00:55:46 +0000 (0:00:00.664) 0:09:45.646 ********** 2026-03-13 00:56:48.990308 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.990312 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.990316 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.990320 | orchestrator | 2026-03-13 00:56:48.990323 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-13 00:56:48.990327 | orchestrator | Friday 13 March 2026 00:55:47 +0000 (0:00:00.660) 0:09:46.306 ********** 2026-03-13 00:56:48.990331 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.990337 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.990341 | orchestrator | skipping: [testbed-node-5] 2026-03-13 
00:56:48.990344 | orchestrator | 2026-03-13 00:56:48.990348 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-13 00:56:48.990352 | orchestrator | Friday 13 March 2026 00:55:47 +0000 (0:00:00.266) 0:09:46.572 ********** 2026-03-13 00:56:48.990355 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.990359 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.990363 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.990367 | orchestrator | 2026-03-13 00:56:48.990370 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-13 00:56:48.990374 | orchestrator | Friday 13 March 2026 00:55:48 +0000 (0:00:00.385) 0:09:46.958 ********** 2026-03-13 00:56:48.990378 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.990382 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.990385 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.990389 | orchestrator | 2026-03-13 00:56:48.990393 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-13 00:56:48.990396 | orchestrator | Friday 13 March 2026 00:55:48 +0000 (0:00:00.266) 0:09:47.224 ********** 2026-03-13 00:56:48.990400 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.990404 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.990408 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.990411 | orchestrator | 2026-03-13 00:56:48.990415 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-13 00:56:48.990419 | orchestrator | Friday 13 March 2026 00:55:48 +0000 (0:00:00.273) 0:09:47.498 ********** 2026-03-13 00:56:48.990423 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.990429 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.990433 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.990436 | orchestrator | 2026-03-13 
00:56:48.990440 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-13 00:56:48.990444 | orchestrator | Friday 13 March 2026 00:55:48 +0000 (0:00:00.280) 0:09:47.779 ********** 2026-03-13 00:56:48.990447 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.990451 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.990455 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.990459 | orchestrator | 2026-03-13 00:56:48.990462 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-13 00:56:48.990466 | orchestrator | Friday 13 March 2026 00:55:49 +0000 (0:00:00.418) 0:09:48.198 ********** 2026-03-13 00:56:48.990470 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.990474 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.990477 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.990481 | orchestrator | 2026-03-13 00:56:48.990485 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-13 00:56:48.990488 | orchestrator | Friday 13 March 2026 00:55:49 +0000 (0:00:00.275) 0:09:48.473 ********** 2026-03-13 00:56:48.990492 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.990496 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.990500 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.990503 | orchestrator | 2026-03-13 00:56:48.990507 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-13 00:56:48.990512 | orchestrator | Friday 13 March 2026 00:55:49 +0000 (0:00:00.255) 0:09:48.728 ********** 2026-03-13 00:56:48.990518 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.990524 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.990533 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.990540 | orchestrator | 2026-03-13 00:56:48.990547 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-13 00:56:48.990553 | orchestrator | Friday 13 March 2026 00:55:50 +0000 (0:00:00.273) 0:09:49.002 ********** 2026-03-13 00:56:48.990559 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.990565 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.990570 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.990575 | orchestrator | 2026-03-13 00:56:48.990581 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-13 00:56:48.990586 | orchestrator | Friday 13 March 2026 00:55:50 +0000 (0:00:00.619) 0:09:49.622 ********** 2026-03-13 00:56:48.990597 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-13 00:56:48.990603 | orchestrator | 2026-03-13 00:56:48.990609 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-13 00:56:48.990615 | orchestrator | Friday 13 March 2026 00:55:51 +0000 (0:00:00.418) 0:09:50.040 ********** 2026-03-13 00:56:48.990620 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-13 00:56:48.990626 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-13 00:56:48.990660 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-13 00:56:48.990666 | orchestrator | 2026-03-13 00:56:48.990673 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-13 00:56:48.990678 | orchestrator | Friday 13 March 2026 00:55:53 +0000 (0:00:02.531) 0:09:52.572 ********** 2026-03-13 00:56:48.990683 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-13 00:56:48.990689 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-13 00:56:48.990694 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:56:48.990700 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-03-13 00:56:48.990705 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-13 00:56:48.990712 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:56:48.990716 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-13 00:56:48.990724 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-13 00:56:48.990728 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:56:48.990732 | orchestrator | 2026-03-13 00:56:48.990736 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-03-13 00:56:48.990739 | orchestrator | Friday 13 March 2026 00:55:55 +0000 (0:00:01.560) 0:09:54.132 ********** 2026-03-13 00:56:48.990743 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.990747 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.990750 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.990754 | orchestrator | 2026-03-13 00:56:48.990758 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-13 00:56:48.990764 | orchestrator | Friday 13 March 2026 00:55:55 +0000 (0:00:00.326) 0:09:54.459 ********** 2026-03-13 00:56:48.990768 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-13 00:56:48.990772 | orchestrator | 2026-03-13 00:56:48.990776 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-13 00:56:48.990780 | orchestrator | Friday 13 March 2026 00:55:56 +0000 (0:00:00.547) 0:09:55.006 ********** 2026-03-13 00:56:48.990783 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-13 00:56:48.990788 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-13 00:56:48.990792 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-13 00:56:48.990796 | orchestrator | 2026-03-13 00:56:48.990799 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-13 00:56:48.990803 | orchestrator | Friday 13 March 2026 00:55:57 +0000 (0:00:01.294) 0:09:56.301 ********** 2026-03-13 00:56:48.990807 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-13 00:56:48.990810 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-13 00:56:48.990814 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-13 00:56:48.990818 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-13 00:56:48.990822 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-13 00:56:48.990825 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-13 00:56:48.990829 | orchestrator | 2026-03-13 00:56:48.990833 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-13 00:56:48.990837 | orchestrator | Friday 13 March 2026 00:56:01 +0000 (0:00:04.060) 0:10:00.362 ********** 2026-03-13 00:56:48.990840 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-13 00:56:48.990844 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-13 00:56:48.990848 | 
orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-13 00:56:48.990851 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-13 00:56:48.990855 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-13 00:56:48.990859 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-13 00:56:48.990863 | orchestrator | 2026-03-13 00:56:48.990866 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-13 00:56:48.990870 | orchestrator | Friday 13 March 2026 00:56:03 +0000 (0:00:02.032) 0:10:02.394 ********** 2026-03-13 00:56:48.990874 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-13 00:56:48.990880 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:56:48.990884 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-13 00:56:48.990887 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:56:48.990891 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-13 00:56:48.990895 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:56:48.990899 | orchestrator | 2026-03-13 00:56:48.990905 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-13 00:56:48.990909 | orchestrator | Friday 13 March 2026 00:56:04 +0000 (0:00:01.284) 0:10:03.678 ********** 2026-03-13 00:56:48.990913 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-03-13 00:56:48.990917 | orchestrator | 2026-03-13 00:56:48.990921 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-13 00:56:48.990924 | orchestrator | Friday 13 March 2026 00:56:04 +0000 (0:00:00.216) 0:10:03.895 ********** 2026-03-13 00:56:48.990928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 
'type': 'replicated'}})  2026-03-13 00:56:48.990932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-13 00:56:48.990936 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-13 00:56:48.990940 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-13 00:56:48.990944 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-13 00:56:48.990948 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.990951 | orchestrator | 2026-03-13 00:56:48.990955 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-13 00:56:48.990959 | orchestrator | Friday 13 March 2026 00:56:05 +0000 (0:00:01.053) 0:10:04.948 ********** 2026-03-13 00:56:48.990964 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-13 00:56:48.990969 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-13 00:56:48.990972 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-13 00:56:48.990976 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-13 00:56:48.990980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-13 00:56:48.990984 | orchestrator | skipping: [testbed-node-3] 2026-03-13 
00:56:48.990987 | orchestrator | 2026-03-13 00:56:48.990991 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-13 00:56:48.990995 | orchestrator | Friday 13 March 2026 00:56:06 +0000 (0:00:00.569) 0:10:05.518 ********** 2026-03-13 00:56:48.990999 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-13 00:56:48.991003 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-13 00:56:48.991006 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-13 00:56:48.991010 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-13 00:56:48.991016 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-13 00:56:48.991020 | orchestrator | 2026-03-13 00:56:48.991024 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-13 00:56:48.991028 | orchestrator | Friday 13 March 2026 00:56:37 +0000 (0:00:30.592) 0:10:36.110 ********** 2026-03-13 00:56:48.991031 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.991035 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.991039 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.991043 | orchestrator | 2026-03-13 00:56:48.991047 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-13 00:56:48.991050 | orchestrator | 
Friday 13 March 2026 00:56:37 +0000 (0:00:00.272) 0:10:36.383 ********** 2026-03-13 00:56:48.991054 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.991058 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.991061 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.991065 | orchestrator | 2026-03-13 00:56:48.991069 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-13 00:56:48.991073 | orchestrator | Friday 13 March 2026 00:56:37 +0000 (0:00:00.270) 0:10:36.654 ********** 2026-03-13 00:56:48.991076 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-13 00:56:48.991080 | orchestrator | 2026-03-13 00:56:48.991084 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-03-13 00:56:48.991088 | orchestrator | Friday 13 March 2026 00:56:38 +0000 (0:00:00.634) 0:10:37.288 ********** 2026-03-13 00:56:48.991094 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-13 00:56:48.991098 | orchestrator | 2026-03-13 00:56:48.991101 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-13 00:56:48.991105 | orchestrator | Friday 13 March 2026 00:56:38 +0000 (0:00:00.485) 0:10:37.774 ********** 2026-03-13 00:56:48.991109 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:56:48.991113 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:56:48.991116 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:56:48.991120 | orchestrator | 2026-03-13 00:56:48.991124 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-13 00:56:48.991128 | orchestrator | Friday 13 March 2026 00:56:40 +0000 (0:00:01.430) 0:10:39.204 ********** 2026-03-13 00:56:48.991131 | orchestrator | changed: 
[testbed-node-3] 2026-03-13 00:56:48.991135 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:56:48.991139 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:56:48.991143 | orchestrator | 2026-03-13 00:56:48.991146 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-13 00:56:48.991150 | orchestrator | Friday 13 March 2026 00:56:41 +0000 (0:00:01.457) 0:10:40.662 ********** 2026-03-13 00:56:48.991154 | orchestrator | changed: [testbed-node-4] 2026-03-13 00:56:48.991158 | orchestrator | changed: [testbed-node-3] 2026-03-13 00:56:48.991193 | orchestrator | changed: [testbed-node-5] 2026-03-13 00:56:48.991197 | orchestrator | 2026-03-13 00:56:48.991201 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-13 00:56:48.991205 | orchestrator | Friday 13 March 2026 00:56:43 +0000 (0:00:01.862) 0:10:42.524 ********** 2026-03-13 00:56:48.991209 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-13 00:56:48.991213 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-13 00:56:48.991219 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-13 00:56:48.991223 | orchestrator | 2026-03-13 00:56:48.991227 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-13 00:56:48.991235 | orchestrator | Friday 13 March 2026 00:56:46 +0000 (0:00:02.591) 0:10:45.116 ********** 2026-03-13 00:56:48.991239 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.991243 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.991247 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.991250 | orchestrator 
| 2026-03-13 00:56:48.991254 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-13 00:56:48.991258 | orchestrator | Friday 13 March 2026 00:56:46 +0000 (0:00:00.304) 0:10:45.421 ********** 2026-03-13 00:56:48.991262 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-13 00:56:48.991265 | orchestrator | 2026-03-13 00:56:48.991269 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-13 00:56:48.991273 | orchestrator | Friday 13 March 2026 00:56:46 +0000 (0:00:00.465) 0:10:45.887 ********** 2026-03-13 00:56:48.991277 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.991281 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.991285 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.991292 | orchestrator | 2026-03-13 00:56:48.991301 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-13 00:56:48.991308 | orchestrator | Friday 13 March 2026 00:56:47 +0000 (0:00:00.445) 0:10:46.332 ********** 2026-03-13 00:56:48.991314 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:56:48.991320 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:56:48.991326 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:56:48.991332 | orchestrator | 2026-03-13 00:56:48.991337 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-13 00:56:48.991342 | orchestrator | Friday 13 March 2026 00:56:47 +0000 (0:00:00.294) 0:10:46.627 ********** 2026-03-13 00:56:48.991347 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-13 00:56:48.991353 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-13 00:56:48.991358 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-13 00:56:48.991363 | orchestrator 
| skipping: [testbed-node-3] 2026-03-13 00:56:48.991368 | orchestrator | 2026-03-13 00:56:48.991374 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-13 00:56:48.991380 | orchestrator | Friday 13 March 2026 00:56:48 +0000 (0:00:00.556) 0:10:47.183 ********** 2026-03-13 00:56:48.991386 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:56:48.991392 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:56:48.991397 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:56:48.991402 | orchestrator | 2026-03-13 00:56:48.991409 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 00:56:48.991415 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-03-13 00:56:48.991422 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-03-13 00:56:48.991429 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-03-13 00:56:48.991435 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-03-13 00:56:48.991441 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-03-13 00:56:48.991452 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-03-13 00:56:48.991458 | orchestrator | 2026-03-13 00:56:48.991465 | orchestrator | 2026-03-13 00:56:48.991471 | orchestrator | 2026-03-13 00:56:48.991477 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 00:56:48.991489 | orchestrator | Friday 13 March 2026 00:56:48 +0000 (0:00:00.211) 0:10:47.395 ********** 2026-03-13 00:56:48.991494 | orchestrator | =============================================================================== 
2026-03-13 00:56:48.991501 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 47.70s 2026-03-13 00:56:48.991508 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 40.23s 2026-03-13 00:56:48.991514 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.08s 2026-03-13 00:56:48.991521 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.59s 2026-03-13 00:56:48.991527 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.70s 2026-03-13 00:56:48.991534 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 13.35s 2026-03-13 00:56:48.991540 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.28s 2026-03-13 00:56:48.991546 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.13s 2026-03-13 00:56:48.991550 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 8.54s 2026-03-13 00:56:48.991554 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.95s 2026-03-13 00:56:48.991558 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.42s 2026-03-13 00:56:48.991561 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.61s 2026-03-13 00:56:48.991568 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.18s 2026-03-13 00:56:48.991572 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.25s 2026-03-13 00:56:48.991575 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.06s 2026-03-13 00:56:48.991579 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 4.05s 2026-03-13 
00:56:48.991583 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.84s 2026-03-13 00:56:48.991586 | orchestrator | ceph-config : Generate Ceph file ---------------------------------------- 3.68s 2026-03-13 00:56:48.991590 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.51s 2026-03-13 00:56:48.991594 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.36s 2026-03-13 00:56:48.991598 | orchestrator | 2026-03-13 00:56:48 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:56:52.012699 | orchestrator | 2026-03-13 00:56:52 | INFO  | Task a4bc5003-8dc1-4589-b3aa-f7d50e3a689e is in state STARTED 2026-03-13 00:56:52.013557 | orchestrator | 2026-03-13 00:56:52 | INFO  | Task 547da46c-916c-4b6d-90e8-fc067c5a6077 is in state STARTED 2026-03-13 00:56:52.016577 | orchestrator | 2026-03-13 00:56:52 | INFO  | Task 52ff5064-272a-4a3f-b0f5-a223baabdd01 is in state STARTED 2026-03-13 00:56:52.016712 | orchestrator | 2026-03-13 00:56:52 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:56:55.067530 | orchestrator | 2026-03-13 00:56:55 | INFO  | Task a4bc5003-8dc1-4589-b3aa-f7d50e3a689e is in state STARTED 2026-03-13 00:56:55.069878 | orchestrator | 2026-03-13 00:56:55 | INFO  | Task 547da46c-916c-4b6d-90e8-fc067c5a6077 is in state STARTED 2026-03-13 00:56:55.071660 | orchestrator | 2026-03-13 00:56:55 | INFO  | Task 52ff5064-272a-4a3f-b0f5-a223baabdd01 is in state STARTED 2026-03-13 00:56:55.071710 | orchestrator | 2026-03-13 00:56:55 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:56:58.106096 | orchestrator | 2026-03-13 00:56:58 | INFO  | Task a4bc5003-8dc1-4589-b3aa-f7d50e3a689e is in state STARTED 2026-03-13 00:56:58.108562 | orchestrator | 2026-03-13 00:56:58 | INFO  | Task 547da46c-916c-4b6d-90e8-fc067c5a6077 is in state SUCCESS 2026-03-13 00:56:58.110290 | orchestrator | 2026-03-13 00:56:58.110338 | orchestrator | 
2026-03-13 00:56:58.110372 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-13 00:56:58.110384 | orchestrator | 2026-03-13 00:56:58.110394 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-13 00:56:58.110404 | orchestrator | Friday 13 March 2026 00:54:25 +0000 (0:00:00.201) 0:00:00.201 ********** 2026-03-13 00:56:58.110415 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:58.110422 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:56:58.110434 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:56:58.110441 | orchestrator | 2026-03-13 00:56:58.110447 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-13 00:56:58.110452 | orchestrator | Friday 13 March 2026 00:54:25 +0000 (0:00:00.248) 0:00:00.450 ********** 2026-03-13 00:56:58.110459 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-03-13 00:56:58.110465 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-03-13 00:56:58.110471 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-03-13 00:56:58.110477 | orchestrator | 2026-03-13 00:56:58.110560 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-03-13 00:56:58.110569 | orchestrator | 2026-03-13 00:56:58.110587 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-13 00:56:58.110605 | orchestrator | Friday 13 March 2026 00:54:25 +0000 (0:00:00.382) 0:00:00.832 ********** 2026-03-13 00:56:58.110616 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:56:58.110758 | orchestrator | 2026-03-13 00:56:58.110766 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-03-13 00:56:58.110772 | orchestrator 
| Friday 13 March 2026 00:54:26 +0000 (0:00:00.459) 0:00:01.292 ********** 2026-03-13 00:56:58.110778 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-13 00:56:58.110784 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-13 00:56:58.110789 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-13 00:56:58.110795 | orchestrator | 2026-03-13 00:56:58.110801 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-13 00:56:58.110807 | orchestrator | Friday 13 March 2026 00:54:26 +0000 (0:00:00.606) 0:00:01.899 ********** 2026-03-13 00:56:58.110824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-13 00:56:58.110833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-13 00:56:58.110857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-13 00:56:58.110865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-13 00:56:58.110872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-13 00:56:58.110883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-13 00:56:58.110892 | orchestrator | 2026-03-13 00:56:58.110899 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-13 00:56:58.110905 | orchestrator | Friday 13 March 2026 00:54:28 +0000 (0:00:01.627) 0:00:03.526 ********** 2026-03-13 00:56:58.110911 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:56:58.110917 | orchestrator | 2026-03-13 00:56:58.110923 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-13 00:56:58.110928 | orchestrator | Friday 13 March 2026 00:54:29 +0000 (0:00:00.614) 0:00:04.141 ********** 2026-03-13 00:56:58.110942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-13 00:56:58.110949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-13 00:56:58.110955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-13 00:56:58.110964 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-13 00:56:58.110978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-13 00:56:58.110985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-13 00:56:58.110992 | orchestrator | 2026-03-13 00:56:58.111076 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-13 00:56:58.111083 | orchestrator | Friday 13 March 2026 00:54:31 +0000 (0:00:02.837) 0:00:06.979 ********** 2026-03-13 00:56:58.111089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-13 00:56:58.111099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-13 00:56:58.111110 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:58.111116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-13 00:56:58.111130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-13 00:56:58.111137 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:58.111143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-13 00:56:58.111152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-13 00:56:58.111162 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:58.111168 | orchestrator | 2026-03-13 00:56:58.111174 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-13 00:56:58.111180 | orchestrator | Friday 13 March 2026 00:54:32 +0000 (0:00:00.998) 0:00:07.977 ********** 2026-03-13 00:56:58.111186 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-13 00:56:58.111196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-13 00:56:58.111203 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:58.111209 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-13 00:56:58.111218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-13 00:56:58.111227 | orchestrator | skipping: [testbed-node-1] 2026-03-13 
00:56:58.111233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-13 00:56:58.111244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-13 00:56:58.111250 | orchestrator | skipping: 
[testbed-node-2] 2026-03-13 00:56:58.111256 | orchestrator | 2026-03-13 00:56:58.111262 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-13 00:56:58.111268 | orchestrator | Friday 13 March 2026 00:54:33 +0000 (0:00:00.680) 0:00:08.658 ********** 2026-03-13 00:56:58.111274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-13 00:56:58.111288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': 
['option dontlog-normal']}}}}) 2026-03-13 00:56:58.111297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-13 00:56:58.111308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-13 
00:56:58.111315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-13 00:56:58.111324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-13 00:56:58.111334 | orchestrator | 2026-03-13 00:56:58.111340 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-13 00:56:58.111346 | orchestrator | Friday 13 March 2026 00:54:35 +0000 (0:00:02.119) 0:00:10.778 ********** 2026-03-13 00:56:58.111352 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:56:58.111358 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:56:58.111364 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:56:58.111370 | orchestrator | 2026-03-13 00:56:58.111376 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-13 00:56:58.111381 | orchestrator | Friday 13 March 2026 00:54:38 +0000 (0:00:02.396) 0:00:13.174 ********** 2026-03-13 00:56:58.111387 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:56:58.111393 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:56:58.111398 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:56:58.111406 | orchestrator | 2026-03-13 00:56:58.111416 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-13 00:56:58.111439 | orchestrator | Friday 13 March 2026 00:54:40 +0000 (0:00:02.289) 0:00:15.463 ********** 2026-03-13 00:56:58.111451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-13 00:56:58.111467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-13 00:56:58.111478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-13 00:56:58.111499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-13 00:56:58.111511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-13 00:56:58.111528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-13 00:56:58.111539 | orchestrator | 2026-03-13 00:56:58.111549 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-13 00:56:58.111559 | orchestrator | Friday 13 March 2026 00:54:42 +0000 (0:00:02.293) 0:00:17.757 ********** 2026-03-13 00:56:58.111569 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:58.111575 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:56:58.111581 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:56:58.111587 | orchestrator | 2026-03-13 00:56:58.111593 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 
2026-03-13 00:56:58.111604 | orchestrator | Friday 13 March 2026 00:54:42 +0000 (0:00:00.296) 0:00:18.053 ********** 2026-03-13 00:56:58.111609 | orchestrator | 2026-03-13 00:56:58.111615 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-13 00:56:58.111640 | orchestrator | Friday 13 March 2026 00:54:43 +0000 (0:00:00.106) 0:00:18.159 ********** 2026-03-13 00:56:58.111646 | orchestrator | 2026-03-13 00:56:58.111652 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-13 00:56:58.111658 | orchestrator | Friday 13 March 2026 00:54:43 +0000 (0:00:00.065) 0:00:18.225 ********** 2026-03-13 00:56:58.111663 | orchestrator | 2026-03-13 00:56:58.111672 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-03-13 00:56:58.111682 | orchestrator | Friday 13 March 2026 00:54:43 +0000 (0:00:00.075) 0:00:18.300 ********** 2026-03-13 00:56:58.111692 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:58.111701 | orchestrator | 2026-03-13 00:56:58.111711 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-03-13 00:56:58.111721 | orchestrator | Friday 13 March 2026 00:54:43 +0000 (0:00:00.657) 0:00:18.958 ********** 2026-03-13 00:56:58.111730 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:56:58.111738 | orchestrator | 2026-03-13 00:56:58.111747 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-03-13 00:56:58.111757 | orchestrator | Friday 13 March 2026 00:54:44 +0000 (0:00:00.271) 0:00:19.229 ********** 2026-03-13 00:56:58.111767 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:56:58.111777 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:56:58.111788 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:56:58.111799 | orchestrator | 2026-03-13 00:56:58.111809 | orchestrator | RUNNING 
HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-03-13 00:56:58.111825 | orchestrator | Friday 13 March 2026 00:55:37 +0000 (0:00:52.936) 0:01:12.166 ********** 2026-03-13 00:56:58.111832 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:56:58.111839 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:56:58.111846 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:56:58.111852 | orchestrator | 2026-03-13 00:56:58.111859 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-13 00:56:58.111866 | orchestrator | Friday 13 March 2026 00:56:43 +0000 (0:01:06.776) 0:02:18.943 ********** 2026-03-13 00:56:58.111872 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:56:58.111879 | orchestrator | 2026-03-13 00:56:58.111885 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-13 00:56:58.111892 | orchestrator | Friday 13 March 2026 00:56:44 +0000 (0:00:00.657) 0:02:19.600 ********** 2026-03-13 00:56:58.111899 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:58.111906 | orchestrator | 2026-03-13 00:56:58.111913 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-03-13 00:56:58.111919 | orchestrator | Friday 13 March 2026 00:56:47 +0000 (0:00:02.987) 0:02:22.587 ********** 2026-03-13 00:56:58.111926 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:58.111933 | orchestrator | 2026-03-13 00:56:58.111940 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-13 00:56:58.111946 | orchestrator | Friday 13 March 2026 00:56:49 +0000 (0:00:02.424) 0:02:25.012 ********** 2026-03-13 00:56:58.111953 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:56:58.111960 | orchestrator | 2026-03-13 00:56:58.111967 | orchestrator | TASK [opensearch : 
Create new log retention policy] **************************** 2026-03-13 00:56:58.111973 | orchestrator | Friday 13 March 2026 00:56:52 +0000 (0:00:02.119) 0:02:27.132 ********** 2026-03-13 00:56:58.111981 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:56:58.111987 | orchestrator | 2026-03-13 00:56:58.111994 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-13 00:56:58.112001 | orchestrator | Friday 13 March 2026 00:56:54 +0000 (0:00:02.622) 0:02:29.755 ********** 2026-03-13 00:56:58.112012 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:56:58.112019 | orchestrator | 2026-03-13 00:56:58.112026 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 00:56:58.112033 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-13 00:56:58.112040 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-13 00:56:58.112051 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-13 00:56:58.112057 | orchestrator | 2026-03-13 00:56:58.112063 | orchestrator | 2026-03-13 00:56:58.112068 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 00:56:58.112074 | orchestrator | Friday 13 March 2026 00:56:57 +0000 (0:00:02.454) 0:02:32.209 ********** 2026-03-13 00:56:58.112080 | orchestrator | =============================================================================== 2026-03-13 00:56:58.112086 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 66.78s 2026-03-13 00:56:58.112092 | orchestrator | opensearch : Restart opensearch container ------------------------------ 52.94s 2026-03-13 00:56:58.112097 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.99s 
2026-03-13 00:56:58.112103 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.84s
2026-03-13 00:56:58.112109 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.62s
2026-03-13 00:56:58.112114 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.45s
2026-03-13 00:56:58.112120 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.42s
2026-03-13 00:56:58.112126 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.40s
2026-03-13 00:56:58.112131 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.29s
2026-03-13 00:56:58.112137 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.29s
2026-03-13 00:56:58.112143 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.12s
2026-03-13 00:56:58.112148 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.12s
2026-03-13 00:56:58.112154 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.63s
2026-03-13 00:56:58.112160 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.00s
2026-03-13 00:56:58.112165 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.68s
2026-03-13 00:56:58.112171 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 0.66s
2026-03-13 00:56:58.112177 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.66s
2026-03-13 00:56:58.112183 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.61s
2026-03-13 00:56:58.112188 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.61s
2026-03-13 00:56:58.112194 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.46s
2026-03-13 00:56:58.112200 | orchestrator | 2026-03-13 00:56:58 | INFO  | Task 52ff5064-272a-4a3f-b0f5-a223baabdd01 is in state STARTED
2026-03-13 00:56:58.112206 | orchestrator | 2026-03-13 00:56:58 | INFO  | Wait 1 second(s) until the next check
2026-03-13 00:57:01.147596 | orchestrator | 2026-03-13 00:57:01 | INFO  | Task a4bc5003-8dc1-4589-b3aa-f7d50e3a689e is in state STARTED
2026-03-13 00:57:01.148156 | orchestrator | 2026-03-13 00:57:01 | INFO  | Task 52ff5064-272a-4a3f-b0f5-a223baabdd01 is in state STARTED
2026-03-13 00:57:01.148441 | orchestrator | 2026-03-13 00:57:01 | INFO  | Wait 1 second(s) until the next check
2026-03-13 00:57:04.184345 | orchestrator | 2026-03-13 00:57:04 | INFO  | Task a4bc5003-8dc1-4589-b3aa-f7d50e3a689e is in state STARTED
2026-03-13 00:57:04.185963 | orchestrator | 2026-03-13 00:57:04 | INFO  | Task 52ff5064-272a-4a3f-b0f5-a223baabdd01 is in state STARTED
2026-03-13 00:57:04.186043 | orchestrator | 2026-03-13 00:57:04 | INFO  | Wait 1 second(s) until the next check
2026-03-13 00:57:07.231226 | orchestrator | 2026-03-13 00:57:07 | INFO  | Task a4bc5003-8dc1-4589-b3aa-f7d50e3a689e is in state STARTED
2026-03-13 00:57:07.231994 | orchestrator | 2026-03-13 00:57:07 | INFO  | Task 52ff5064-272a-4a3f-b0f5-a223baabdd01 is in state STARTED
2026-03-13 00:57:07.232052 | orchestrator | 2026-03-13 00:57:07 | INFO  | Wait 1 second(s) until the next check
2026-03-13 00:57:10.266889 | orchestrator | 2026-03-13 00:57:10 | INFO  | Task a4bc5003-8dc1-4589-b3aa-f7d50e3a689e is in state STARTED
2026-03-13 00:57:10.269423 | orchestrator | 2026-03-13 00:57:10 | INFO  | Task 52ff5064-272a-4a3f-b0f5-a223baabdd01 is in state STARTED
2026-03-13 00:57:10.269968 | orchestrator | 2026-03-13 00:57:10 | INFO  | Wait 1 second(s) until the next check
2026-03-13 00:57:13.316544 | orchestrator | 2026-03-13 00:57:13 | INFO  | Task a4bc5003-8dc1-4589-b3aa-f7d50e3a689e is in state STARTED
2026-03-13 00:57:13.316915 | orchestrator | 2026-03-13 00:57:13 | INFO  | Task 52ff5064-272a-4a3f-b0f5-a223baabdd01 is in state STARTED
2026-03-13 00:57:13.316933 | orchestrator | 2026-03-13 00:57:13 | INFO  | Wait 1 second(s) until the next check
2026-03-13 00:57:16.344649 | orchestrator |
2026-03-13 00:57:16.344704 | orchestrator |
2026-03-13 00:57:16.344709 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2026-03-13 00:57:16.344713 | orchestrator |
2026-03-13 00:57:16.344717 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-03-13 00:57:16.344720 | orchestrator | Friday 13 March 2026 00:54:25 +0000 (0:00:00.121) 0:00:00.121 **********
2026-03-13 00:57:16.344724 | orchestrator | ok: [localhost] => {
2026-03-13 00:57:16.344728 | orchestrator |     "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2026-03-13 00:57:16.344732 | orchestrator | }
2026-03-13 00:57:16.344735 | orchestrator |
2026-03-13 00:57:16.344738 | orchestrator | TASK [Check MariaDB service] ***************************************************
2026-03-13 00:57:16.344741 | orchestrator | Friday 13 March 2026 00:54:25 +0000 (0:00:00.038) 0:00:00.159 **********
2026-03-13 00:57:16.344745 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2026-03-13 00:57:16.344749 | orchestrator | ...ignoring
2026-03-13 00:57:16.344754 | orchestrator |
2026-03-13 00:57:16.344763 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2026-03-13 00:57:16.344769 | orchestrator | Friday 13 March 2026 00:54:27 +0000 (0:00:02.682) 0:00:02.841 **********
2026-03-13 00:57:16.344774 | orchestrator | skipping: [localhost]
2026-03-13 00:57:16.344779 | orchestrator |
2026-03-13 00:57:16.344784 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2026-03-13 00:57:16.344788 | orchestrator | Friday 13 March 2026 00:54:27 +0000 (0:00:00.038) 0:00:02.880 **********
2026-03-13 00:57:16.344793 | orchestrator | ok: [localhost]
2026-03-13 00:57:16.344797 | orchestrator |
2026-03-13 00:57:16.344802 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-13 00:57:16.344806 | orchestrator |
2026-03-13 00:57:16.344811 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-13 00:57:16.344817 | orchestrator | Friday 13 March 2026 00:54:28 +0000 (0:00:00.134) 0:00:03.014 **********
2026-03-13 00:57:16.344822 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:57:16.344827 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:57:16.344832 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:57:16.344838 | orchestrator |
2026-03-13 00:57:16.344843 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-13 00:57:16.344860 | orchestrator | Friday 13 March 2026 00:54:28 +0000 (0:00:00.281) 0:00:03.296 **********
2026-03-13 00:57:16.344864 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-03-13 00:57:16.344867 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-03-13 00:57:16.344870 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-13 00:57:16.344874 | orchestrator | 2026-03-13 00:57:16.344879 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-13 00:57:16.344884 | orchestrator | 2026-03-13 00:57:16.344889 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-13 00:57:16.344894 | orchestrator | Friday 13 March 2026 00:54:28 +0000 (0:00:00.498) 0:00:03.795 ********** 2026-03-13 00:57:16.344899 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-13 00:57:16.344904 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-13 00:57:16.344910 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-13 00:57:16.344915 | orchestrator | 2026-03-13 00:57:16.344920 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-13 00:57:16.344925 | orchestrator | Friday 13 March 2026 00:54:29 +0000 (0:00:00.367) 0:00:04.162 ********** 2026-03-13 00:57:16.344939 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:57:16.344945 | orchestrator | 2026-03-13 00:57:16.344949 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-13 00:57:16.344952 | orchestrator | Friday 13 March 2026 00:54:29 +0000 (0:00:00.519) 0:00:04.681 ********** 2026-03-13 00:57:16.344968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-13 00:57:16.344973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-13 00:57:16.344982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-13 00:57:16.344986 | orchestrator | 2026-03-13 00:57:16.344992 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-13 00:57:16.344998 | orchestrator | Friday 13 March 2026 00:54:32 +0000 (0:00:03.004) 0:00:07.686 ********** 2026-03-13 00:57:16.345005 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:57:16.345011 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:57:16.345016 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:57:16.345021 | orchestrator | 2026-03-13 00:57:16.345025 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-13 00:57:16.345031 | orchestrator | Friday 13 March 2026 00:54:33 +0000 (0:00:00.516) 0:00:08.202 ********** 2026-03-13 00:57:16.345035 | orchestrator | skipping: [testbed-node-1] 2026-03-13 
00:57:16.345040 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:57:16.345045 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:57:16.345056 | orchestrator | 2026-03-13 00:57:16.345062 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-13 00:57:16.345067 | orchestrator | Friday 13 March 2026 00:54:34 +0000 (0:00:01.292) 0:00:09.495 ********** 2026-03-13 00:57:16.345075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-13 00:57:16.345084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-13 00:57:16.345091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-13 
00:57:16.345254 | orchestrator | 2026-03-13 00:57:16.345262 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-03-13 00:57:16.345267 | orchestrator | Friday 13 March 2026 00:54:37 +0000 (0:00:03.056) 0:00:12.551 ********** 2026-03-13 00:57:16.345272 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:57:16.345278 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:57:16.345283 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:57:16.345288 | orchestrator | 2026-03-13 00:57:16.345293 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-03-13 00:57:16.345299 | orchestrator | Friday 13 March 2026 00:54:38 +0000 (0:00:01.087) 0:00:13.638 ********** 2026-03-13 00:57:16.345306 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:57:16.345312 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:57:16.345317 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:57:16.345322 | orchestrator | 2026-03-13 00:57:16.345327 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-13 00:57:16.345333 | orchestrator | Friday 13 March 2026 00:54:42 +0000 (0:00:04.336) 0:00:17.975 ********** 2026-03-13 00:57:16.345338 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:57:16.345343 | orchestrator | 2026-03-13 00:57:16.345349 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-13 00:57:16.345354 | orchestrator | Friday 13 March 2026 00:54:43 +0000 (0:00:00.572) 0:00:18.548 ********** 2026-03-13 00:57:16.345369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-13 00:57:16.345378 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:57:16.345385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-13 00:57:16.345391 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:57:16.345401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-13 00:57:16.345410 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:57:16.345415 | orchestrator | 2026-03-13 00:57:16.345421 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-13 00:57:16.345426 | orchestrator | Friday 13 March 2026 00:54:46 +0000 (0:00:03.098) 0:00:21.646 ********** 2026-03-13 00:57:16.345432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-13 00:57:16.345438 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:57:16.345448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-13 00:57:16.345457 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:57:16.345463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-13 00:57:16.345469 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:57:16.345475 | orchestrator | 2026-03-13 00:57:16.345480 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-13 00:57:16.345486 | orchestrator | Friday 13 March 2026 00:54:49 +0000 (0:00:02.798) 0:00:24.445 ********** 2026-03-13 00:57:16.345493 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-13 00:57:16.345505 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:57:16.345514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-13 00:57:16.345520 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:57:16.345527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-13 00:57:16.345536 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:57:16.345542 | orchestrator | 2026-03-13 00:57:16.345548 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-03-13 00:57:16.345557 | orchestrator | Friday 13 March 2026 00:54:51 +0000 
(0:00:02.485) 0:00:26.930 ********** 2026-03-13 00:57:16.345565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-13 00:57:16.345573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-13 00:57:16.345605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-13 00:57:16.345612 | orchestrator | 2026-03-13 00:57:16.345617 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-03-13 00:57:16.345622 | orchestrator | Friday 13 March 2026 00:54:55 +0000 (0:00:03.216) 0:00:30.147 ********** 2026-03-13 00:57:16.345627 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:57:16.345633 | orchestrator | 
changed: [testbed-node-1] 2026-03-13 00:57:16.345638 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:57:16.345643 | orchestrator | 2026-03-13 00:57:16.345648 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-03-13 00:57:16.345651 | orchestrator | Friday 13 March 2026 00:54:55 +0000 (0:00:00.816) 0:00:30.964 ********** 2026-03-13 00:57:16.345654 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:57:16.345658 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:57:16.345661 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:57:16.345664 | orchestrator | 2026-03-13 00:57:16.345667 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-03-13 00:57:16.345670 | orchestrator | Friday 13 March 2026 00:54:56 +0000 (0:00:00.414) 0:00:31.378 ********** 2026-03-13 00:57:16.345673 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:57:16.345676 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:57:16.345679 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:57:16.345682 | orchestrator | 2026-03-13 00:57:16.345685 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-03-13 00:57:16.345688 | orchestrator | Friday 13 March 2026 00:54:56 +0000 (0:00:00.477) 0:00:31.856 ********** 2026-03-13 00:57:16.345692 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-03-13 00:57:16.345695 | orchestrator | ...ignoring 2026-03-13 00:57:16.345698 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-03-13 00:57:16.345704 | orchestrator | ...ignoring 2026-03-13 00:57:16.345709 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-03-13 00:57:16.345712 | orchestrator | ...ignoring 2026-03-13 00:57:16.345715 | orchestrator | 2026-03-13 00:57:16.345718 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-03-13 00:57:16.345721 | orchestrator | Friday 13 March 2026 00:55:07 +0000 (0:00:10.864) 0:00:42.720 ********** 2026-03-13 00:57:16.345724 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:57:16.345727 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:57:16.345730 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:57:16.345733 | orchestrator | 2026-03-13 00:57:16.345736 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-03-13 00:57:16.345739 | orchestrator | Friday 13 March 2026 00:55:08 +0000 (0:00:00.432) 0:00:43.152 ********** 2026-03-13 00:57:16.345742 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:57:16.345745 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:57:16.345748 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:57:16.345751 | orchestrator | 2026-03-13 00:57:16.345754 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-03-13 00:57:16.345757 | orchestrator | Friday 13 March 2026 00:55:08 +0000 (0:00:00.642) 0:00:43.795 ********** 2026-03-13 00:57:16.345760 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:57:16.345763 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:57:16.345766 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:57:16.345769 | orchestrator | 2026-03-13 00:57:16.345772 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-03-13 00:57:16.345775 | orchestrator | Friday 13 March 2026 00:55:09 +0000 (0:00:00.480) 0:00:44.275 ********** 2026-03-13 00:57:16.345778 | orchestrator | skipping: 
[testbed-node-0] 2026-03-13 00:57:16.345781 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:57:16.345784 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:57:16.345787 | orchestrator | 2026-03-13 00:57:16.345790 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-03-13 00:57:16.345794 | orchestrator | Friday 13 March 2026 00:55:09 +0000 (0:00:00.438) 0:00:44.714 ********** 2026-03-13 00:57:16.345797 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:57:16.345800 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:57:16.345803 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:57:16.345806 | orchestrator | 2026-03-13 00:57:16.345809 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-03-13 00:57:16.345812 | orchestrator | Friday 13 March 2026 00:55:10 +0000 (0:00:00.489) 0:00:45.203 ********** 2026-03-13 00:57:16.345817 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:57:16.345820 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:57:16.345823 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:57:16.345826 | orchestrator | 2026-03-13 00:57:16.345829 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-13 00:57:16.345832 | orchestrator | Friday 13 March 2026 00:55:10 +0000 (0:00:00.695) 0:00:45.899 ********** 2026-03-13 00:57:16.345835 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:57:16.345838 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:57:16.345841 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-03-13 00:57:16.345844 | orchestrator | 2026-03-13 00:57:16.345848 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-03-13 00:57:16.345851 | orchestrator | Friday 13 March 2026 00:55:11 +0000 (0:00:00.377) 0:00:46.277 ********** 2026-03-13 
00:57:16.345854 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:57:16.345857 | orchestrator | 2026-03-13 00:57:16.345860 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-03-13 00:57:16.345863 | orchestrator | Friday 13 March 2026 00:55:21 +0000 (0:00:10.587) 0:00:56.864 ********** 2026-03-13 00:57:16.345868 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:57:16.345871 | orchestrator | 2026-03-13 00:57:16.345874 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-13 00:57:16.345877 | orchestrator | Friday 13 March 2026 00:55:21 +0000 (0:00:00.126) 0:00:56.990 ********** 2026-03-13 00:57:16.345880 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:57:16.345883 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:57:16.345886 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:57:16.345890 | orchestrator | 2026-03-13 00:57:16.345893 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-03-13 00:57:16.345897 | orchestrator | Friday 13 March 2026 00:55:22 +0000 (0:00:00.949) 0:00:57.940 ********** 2026-03-13 00:57:16.345900 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:57:16.345904 | orchestrator | 2026-03-13 00:57:16.345907 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-03-13 00:57:16.345911 | orchestrator | Friday 13 March 2026 00:55:31 +0000 (0:00:08.081) 0:01:06.021 ********** 2026-03-13 00:57:16.345914 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:57:16.345918 | orchestrator | 2026-03-13 00:57:16.345921 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-03-13 00:57:16.345925 | orchestrator | Friday 13 March 2026 00:55:32 +0000 (0:00:01.672) 0:01:07.693 ********** 2026-03-13 00:57:16.345928 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:57:16.345932 | 
orchestrator | 2026-03-13 00:57:16.345936 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-03-13 00:57:16.345939 | orchestrator | Friday 13 March 2026 00:55:35 +0000 (0:00:02.600) 0:01:10.293 ********** 2026-03-13 00:57:16.345943 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:57:16.345946 | orchestrator | 2026-03-13 00:57:16.345950 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-03-13 00:57:16.345953 | orchestrator | Friday 13 March 2026 00:55:35 +0000 (0:00:00.119) 0:01:10.413 ********** 2026-03-13 00:57:16.345957 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:57:16.345960 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:57:16.345964 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:57:16.345967 | orchestrator | 2026-03-13 00:57:16.345971 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-03-13 00:57:16.345975 | orchestrator | Friday 13 March 2026 00:55:35 +0000 (0:00:00.308) 0:01:10.722 ********** 2026-03-13 00:57:16.345978 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:57:16.345982 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:57:16.345985 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:57:16.345991 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-03-13 00:57:16.345995 | orchestrator | 2026-03-13 00:57:16.345998 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-13 00:57:16.346002 | orchestrator | skipping: no hosts matched 2026-03-13 00:57:16.346005 | orchestrator | 2026-03-13 00:57:16.346009 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-13 00:57:16.346038 | orchestrator | 2026-03-13 00:57:16.346047 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-03-13 00:57:16.346052 | orchestrator | Friday 13 March 2026 00:55:36 +0000 (0:00:00.579) 0:01:11.301 ********** 2026-03-13 00:57:16.346057 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:57:16.346062 | orchestrator | 2026-03-13 00:57:16.346067 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-13 00:57:16.346072 | orchestrator | Friday 13 March 2026 00:55:52 +0000 (0:00:15.835) 0:01:27.136 ********** 2026-03-13 00:57:16.346077 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:57:16.346082 | orchestrator | 2026-03-13 00:57:16.346087 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-13 00:57:16.346093 | orchestrator | Friday 13 March 2026 00:56:07 +0000 (0:00:15.662) 0:01:42.798 ********** 2026-03-13 00:57:16.346098 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:57:16.346103 | orchestrator | 2026-03-13 00:57:16.346113 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-13 00:57:16.346119 | orchestrator | 2026-03-13 00:57:16.346124 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-13 00:57:16.346130 | orchestrator | Friday 13 March 2026 00:56:10 +0000 (0:00:02.234) 0:01:45.033 ********** 2026-03-13 00:57:16.346135 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:57:16.346140 | orchestrator | 2026-03-13 00:57:16.346144 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-13 00:57:16.346148 | orchestrator | Friday 13 March 2026 00:56:27 +0000 (0:00:17.117) 0:02:02.151 ********** 2026-03-13 00:57:16.346152 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:57:16.346155 | orchestrator | 2026-03-13 00:57:16.346159 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-13 00:57:16.346162 
| orchestrator | Friday 13 March 2026 00:56:43 +0000 (0:00:16.118) 0:02:18.269 ********** 2026-03-13 00:57:16.346166 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:57:16.346170 | orchestrator | 2026-03-13 00:57:16.346173 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-13 00:57:16.346176 | orchestrator | 2026-03-13 00:57:16.346183 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-13 00:57:16.346187 | orchestrator | Friday 13 March 2026 00:56:45 +0000 (0:00:02.501) 0:02:20.770 ********** 2026-03-13 00:57:16.346190 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:57:16.346194 | orchestrator | 2026-03-13 00:57:16.346197 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-13 00:57:16.346201 | orchestrator | Friday 13 March 2026 00:56:56 +0000 (0:00:10.277) 0:02:31.048 ********** 2026-03-13 00:57:16.346204 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:57:16.346208 | orchestrator | 2026-03-13 00:57:16.346211 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-13 00:57:16.346215 | orchestrator | Friday 13 March 2026 00:57:00 +0000 (0:00:04.532) 0:02:35.580 ********** 2026-03-13 00:57:16.346219 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:57:16.346222 | orchestrator | 2026-03-13 00:57:16.346226 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-13 00:57:16.346229 | orchestrator | 2026-03-13 00:57:16.346232 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-13 00:57:16.346236 | orchestrator | Friday 13 March 2026 00:57:03 +0000 (0:00:02.489) 0:02:38.070 ********** 2026-03-13 00:57:16.346239 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:57:16.346243 | orchestrator | 
2026-03-13 00:57:16.346246 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-03-13 00:57:16.346250 | orchestrator | Friday 13 March 2026 00:57:03 +0000 (0:00:00.517) 0:02:38.587 ********** 2026-03-13 00:57:16.346253 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:57:16.346257 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:57:16.346260 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:57:16.346264 | orchestrator | 2026-03-13 00:57:16.346267 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-03-13 00:57:16.346271 | orchestrator | Friday 13 March 2026 00:57:05 +0000 (0:00:02.136) 0:02:40.724 ********** 2026-03-13 00:57:16.346274 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:57:16.346278 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:57:16.346281 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:57:16.346285 | orchestrator | 2026-03-13 00:57:16.346288 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-03-13 00:57:16.346292 | orchestrator | Friday 13 March 2026 00:57:08 +0000 (0:00:02.381) 0:02:43.106 ********** 2026-03-13 00:57:16.346296 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:57:16.346299 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:57:16.346302 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:57:16.346305 | orchestrator | 2026-03-13 00:57:16.346308 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-13 00:57:16.346314 | orchestrator | Friday 13 March 2026 00:57:10 +0000 (0:00:02.610) 0:02:45.716 ********** 2026-03-13 00:57:16.346317 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:57:16.346320 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:57:16.346323 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:57:16.346326 | orchestrator | 
2026-03-13 00:57:16.346329 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-03-13 00:57:16.346332 | orchestrator | Friday 13 March 2026 00:57:12 +0000 (0:00:01.995) 0:02:47.712 **********
2026-03-13 00:57:16.346335 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:57:16.346338 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:57:16.346341 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:57:16.346344 | orchestrator |
2026-03-13 00:57:16.346347 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-03-13 00:57:16.346350 | orchestrator | Friday 13 March 2026 00:57:15 +0000 (0:00:02.823) 0:02:50.536 **********
2026-03-13 00:57:16.346356 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:57:16.346359 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:57:16.346362 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:57:16.346365 | orchestrator |
2026-03-13 00:57:16.346368 | orchestrator | PLAY RECAP *********************************************************************
2026-03-13 00:57:16.346371 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-03-13 00:57:16.346374 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2026-03-13 00:57:16.346378 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-03-13 00:57:16.346381 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-03-13 00:57:16.346384 | orchestrator |
2026-03-13 00:57:16.346387 | orchestrator |
2026-03-13 00:57:16.346390 | orchestrator | TASKS RECAP ********************************************************************
2026-03-13 00:57:16.346393 | orchestrator | Friday 13 March 2026 00:57:15 +0000 (0:00:00.202) 0:02:50.739 **********
2026-03-13 00:57:16.346396 | orchestrator | ===============================================================================
2026-03-13 00:57:16.346399 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 32.95s
2026-03-13 00:57:16.346402 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 31.78s
2026-03-13 00:57:16.346405 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.86s
2026-03-13 00:57:16.346408 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.59s
2026-03-13 00:57:16.346411 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 10.28s
2026-03-13 00:57:16.346414 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.08s
2026-03-13 00:57:16.346419 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.74s
2026-03-13 00:57:16.346422 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.53s
2026-03-13 00:57:16.346425 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.34s
2026-03-13 00:57:16.346428 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.22s
2026-03-13 00:57:16.346431 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.10s
2026-03-13 00:57:16.346434 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.06s
2026-03-13 00:57:16.346437 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.00s
2026-03-13 00:57:16.346450 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.82s
2026-03-13 00:57:16.346459 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.80s
2026-03-13 00:57:16.346467 | orchestrator | Check MariaDB service --------------------------------------------------- 2.68s
2026-03-13 00:57:16.346472 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.61s
2026-03-13 00:57:16.346478 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.60s
2026-03-13 00:57:16.346483 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.49s
2026-03-13 00:57:16.346488 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.49s
2026-03-13 00:57:16.346494 | orchestrator | 2026-03-13 00:57:16 | INFO  | Task a4bc5003-8dc1-4589-b3aa-f7d50e3a689e is in state SUCCESS
2026-03-13 00:57:16.346499 | orchestrator | 2026-03-13 00:57:16 | INFO  | Task 52ff5064-272a-4a3f-b0f5-a223baabdd01 is in state STARTED
2026-03-13 00:57:16.346504 | orchestrator | 2026-03-13 00:57:16 | INFO  | Wait 1 second(s) until the next check
2026-03-13 00:57:19.391654 | orchestrator | 2026-03-13 00:57:19 | INFO  | Task 9e3c0dc2-27f9-4133-a7e6-aef4b260d5d5 is in state STARTED
2026-03-13 00:57:19.393634 | orchestrator | 2026-03-13 00:57:19 | INFO  | Task 90aee443-daf9-4afa-826d-edcd2015bac1 is in state STARTED
2026-03-13 00:57:19.395946 | orchestrator | 2026-03-13 00:57:19 | INFO  | Task 52ff5064-272a-4a3f-b0f5-a223baabdd01 is in state STARTED
2026-03-13 00:57:19.396021 | orchestrator | 2026-03-13 00:57:19 | INFO  | Wait 1 second(s) until the next check
2026-03-13 00:57:22.444757 | orchestrator | 2026-03-13 00:57:22 | INFO  | Task 9e3c0dc2-27f9-4133-a7e6-aef4b260d5d5 is in state STARTED
2026-03-13 00:57:22.445743 | orchestrator | 2026-03-13 00:57:22 | INFO  | Task 90aee443-daf9-4afa-826d-edcd2015bac1 is in state STARTED
2026-03-13 00:57:22.447085 | orchestrator | 2026-03-13 00:57:22 | INFO  | Task 52ff5064-272a-4a3f-b0f5-a223baabdd01 is in state STARTED
2026-03-13 00:57:22.447484 | orchestrator | 2026-03-13 00:57:22 | INFO  | Wait 1 second(s) until
the next check
2026-03-13 00:58:41.546647 | orchestrator | 2026-03-13 00:58:41 | INFO  | Task 9e3c0dc2-27f9-4133-a7e6-aef4b260d5d5 is in state STARTED
2026-03-13 00:58:41.550834 | orchestrator | 2026-03-13 00:58:41 | INFO  | Task 90aee443-daf9-4afa-826d-edcd2015bac1 is in state SUCCESS
2026-03-13 00:58:41.552243 | orchestrator |
2026-03-13 00:58:41.552299 | orchestrator |
2026-03-13 00:58:41.552310 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-13 00:58:41.552318 | orchestrator |
2026-03-13 00:58:41.552326 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-13 00:58:41.552333 | orchestrator | Friday 13 March 2026 00:57:19 +0000 (0:00:00.234) 0:00:00.234 **********
2026-03-13 00:58:41.552340 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:58:41.552347 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:58:41.552355 | orchestrator | ok:
[testbed-node-2]
2026-03-13 00:58:41.552361 | orchestrator |
2026-03-13 00:58:41.552565 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-13 00:58:41.552580 | orchestrator | Friday 13 March 2026 00:57:20 +0000 (0:00:00.298) 0:00:00.532 **********
2026-03-13 00:58:41.552587 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-03-13 00:58:41.552593 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-03-13 00:58:41.552599 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-03-13 00:58:41.552605 | orchestrator |
2026-03-13 00:58:41.552612 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-03-13 00:58:41.552619 | orchestrator |
2026-03-13 00:58:41.552636 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-13 00:58:41.552643 | orchestrator | Friday 13 March 2026 00:57:20 +0000 (0:00:00.486) 0:00:00.970 **********
2026-03-13 00:58:41.552649 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 00:58:41.552656 | orchestrator |
2026-03-13 00:58:41.552663 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-03-13 00:58:41.552670 | orchestrator | Friday 13 March 2026 00:57:20 +0000 (0:00:00.486) 0:00:01.457 **********
2026-03-13 00:58:41.552682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no',
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-13 00:58:41.552725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-13 00:58:41.552734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 
'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': 
{'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-13 00:58:41.552747 | orchestrator |
2026-03-13 00:58:41.552754 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2026-03-13 00:58:41.552761 | orchestrator | Friday 13 March 2026 00:57:22 +0000 (0:00:01.286) 0:00:02.743 **********
2026-03-13 00:58:41.552768 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:58:41.552774 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:58:41.552780 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:58:41.552787 | orchestrator |
2026-03-13 00:58:41.552794 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-13 00:58:41.552800 | orchestrator | Friday 13 March 2026 00:57:22 +0000 (0:00:00.427) 0:00:03.171 **********
2026-03-13 00:58:41.552807 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-13 00:58:41.552819 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-13 00:58:41.552826 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-03-13 00:58:41.552833 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-03-13 00:58:41.552840 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-03-13 00:58:41.552846 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-03-13 00:58:41.552852 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-03-13 00:58:41.552859 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-03-13 00:58:41.552865 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-13 00:58:41.552870 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-13 00:58:41.552880 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-03-13 00:58:41.552887 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-03-13 00:58:41.552894 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-03-13 00:58:41.552900 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-03-13 00:58:41.552906 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-03-13 00:58:41.552912 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-03-13 00:58:41.552918 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-13 00:58:41.552925 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-13 00:58:41.552931 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-03-13 00:58:41.552938 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-03-13 00:58:41.552945 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-03-13 00:58:41.552951 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-03-13 00:58:41.552958 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-03-13 00:58:41.552964 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-03-13 00:58:41.552978 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-03-13 00:58:41.552986 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-03-13 00:58:41.552992 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-03-13 00:58:41.552999 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-03-13 00:58:41.553006 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-03-13 00:58:41.553012 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-03-13 00:58:41.553019 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-03-13 00:58:41.553025 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-03-13 00:58:41.553032 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-03-13 00:58:41.553039 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-03-13 00:58:41.553046 | orchestrator |
2026-03-13 00:58:41.553053 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-13 00:58:41.553060 | orchestrator | Friday 13 March 2026 00:57:23 +0000 (0:00:00.766) 0:00:03.937 **********
2026-03-13 00:58:41.553066 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:58:41.553073 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:58:41.553080 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:58:41.553087 | orchestrator |
2026-03-13 00:58:41.553094 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-13 00:58:41.553100 | orchestrator | Friday 13 March 2026 00:57:23 +0000 (0:00:00.299) 0:00:04.236 **********
2026-03-13 00:58:41.553106 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:58:41.553113 | orchestrator |
2026-03-13 00:58:41.553124 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-13 00:58:41.553131 | orchestrator | Friday 13 March 2026 00:57:23 +0000 (0:00:00.128) 0:00:04.364 **********
2026-03-13 00:58:41.553137 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:58:41.553143 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:58:41.553169 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:58:41.553176 | orchestrator |
2026-03-13 00:58:41.553183 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-13 00:58:41.553190 | orchestrator | Friday 13 March 2026 00:57:24 +0000 (0:00:00.475) 0:00:04.840 **********
2026-03-13 00:58:41.553197 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:58:41.553206 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:58:41.553213 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:58:41.553220 | orchestrator |
2026-03-13 00:58:41.553227 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-13 00:58:41.553234 | orchestrator | Friday 13 March 2026 00:57:24 +0000 (0:00:00.302) 0:00:05.143 **********
2026-03-13 00:58:41.553242 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:58:41.553249 | orchestrator |
2026-03-13 00:58:41.553261 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-13 00:58:41.553268 | orchestrator | Friday 13 March 2026 00:57:24 +0000 (0:00:00.118) 0:00:05.261 **********
2026-03-13 00:58:41.553281 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:58:41.553289 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:58:41.553295 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:58:41.553302 | orchestrator |
2026-03-13 00:58:41.553308 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-13 00:58:41.553315 | orchestrator | Friday 13 March 2026 00:57:25 +0000 (0:00:00.278) 0:00:05.539 **********
2026-03-13 00:58:41.553322 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:58:41.553329 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:58:41.553336 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:58:41.553343 | orchestrator |
2026-03-13 00:58:41.553350 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-13 00:58:41.553356 | orchestrator | Friday 13 March 2026 00:57:25 +0000 (0:00:00.312) 0:00:05.909 **********
2026-03-13 00:58:41.553363 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:58:41.553369 | orchestrator |
2026-03-13 00:58:41.553376 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-13 00:58:41.553383 | orchestrator | Friday 13 March 2026 00:57:25 +0000 (0:00:00.273) 0:00:06.221 **********
2026-03-13 00:58:41.553389 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:58:41.553395 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:58:41.553402 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:58:41.553408 | orchestrator |
2026-03-13 00:58:41.553415 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-13 00:58:41.553422 | orchestrator | Friday 13 March 2026 00:57:25 +0000 (0:00:00.273) 0:00:06.494 **********
2026-03-13 00:58:41.553428 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:58:41.553435 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:58:41.553442 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:58:41.553448 | orchestrator |
2026-03-13 00:58:41.553455 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-13 00:58:41.553462 | orchestrator | Friday 13 March 2026 00:57:26 +0000 (0:00:00.319) 0:00:06.814 **********
2026-03-13 00:58:41.553484 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:58:41.553492 | orchestrator |
2026-03-13 00:58:41.553499 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-13 00:58:41.553505 | orchestrator | Friday 13 March 2026 00:57:26 +0000 (0:00:00.139) 0:00:06.953 **********
2026-03-13 00:58:41.553510 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:58:41.553516 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:58:41.553521 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:58:41.553527 | orchestrator |
2026-03-13 00:58:41.553627 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-13 00:58:41.553637 | orchestrator | Friday 13 March 2026 00:57:26 +0000 (0:00:00.289) 0:00:07.243 **********
2026-03-13 00:58:41.553644 | orchestrator | ok: [testbed-node-0]
2026-03-13 00:58:41.553650 | orchestrator | ok: [testbed-node-1]
2026-03-13 00:58:41.553657 | orchestrator | ok: [testbed-node-2]
2026-03-13 00:58:41.553664 | orchestrator |
2026-03-13 00:58:41.553671 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-13 00:58:41.553678 | orchestrator | Friday 13 March 2026 00:57:27 +0000 (0:00:00.495) 0:00:07.738 **********
2026-03-13 00:58:41.553684 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:58:41.553690 | orchestrator |
2026-03-13 00:58:41.553696 |
orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-13 00:58:41.553702 | orchestrator | Friday 13 March 2026 00:57:27 +0000 (0:00:00.124) 0:00:07.863 ********** 2026-03-13 00:58:41.553708 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:58:41.553715 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:58:41.553721 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:58:41.553728 | orchestrator | 2026-03-13 00:58:41.553735 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-13 00:58:41.553741 | orchestrator | Friday 13 March 2026 00:57:27 +0000 (0:00:00.284) 0:00:08.147 ********** 2026-03-13 00:58:41.553755 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:58:41.553762 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:58:41.553769 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:58:41.553775 | orchestrator | 2026-03-13 00:58:41.553782 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-13 00:58:41.553850 | orchestrator | Friday 13 March 2026 00:57:27 +0000 (0:00:00.328) 0:00:08.476 ********** 2026-03-13 00:58:41.553861 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:58:41.553868 | orchestrator | 2026-03-13 00:58:41.553875 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-13 00:58:41.553881 | orchestrator | Friday 13 March 2026 00:57:28 +0000 (0:00:00.130) 0:00:08.607 ********** 2026-03-13 00:58:41.553888 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:58:41.553894 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:58:41.553901 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:58:41.553907 | orchestrator | 2026-03-13 00:58:41.553916 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-13 00:58:41.553933 | orchestrator | Friday 13 March 2026 
00:57:28 +0000 (0:00:00.278) 0:00:08.885 ********** 2026-03-13 00:58:41.553939 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:58:41.553945 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:58:41.553951 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:58:41.553956 | orchestrator | 2026-03-13 00:58:41.553962 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-13 00:58:41.553969 | orchestrator | Friday 13 March 2026 00:57:28 +0000 (0:00:00.536) 0:00:09.421 ********** 2026-03-13 00:58:41.553974 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:58:41.553980 | orchestrator | 2026-03-13 00:58:41.553987 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-13 00:58:41.553996 | orchestrator | Friday 13 March 2026 00:57:29 +0000 (0:00:00.128) 0:00:09.550 ********** 2026-03-13 00:58:41.554002 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:58:41.554009 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:58:41.554132 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:58:41.554147 | orchestrator | 2026-03-13 00:58:41.554154 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-13 00:58:41.554160 | orchestrator | Friday 13 March 2026 00:57:29 +0000 (0:00:00.273) 0:00:09.824 ********** 2026-03-13 00:58:41.554166 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:58:41.554179 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:58:41.554185 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:58:41.554192 | orchestrator | 2026-03-13 00:58:41.554198 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-13 00:58:41.554208 | orchestrator | Friday 13 March 2026 00:57:29 +0000 (0:00:00.304) 0:00:10.129 ********** 2026-03-13 00:58:41.554214 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:58:41.554220 | orchestrator | 2026-03-13 
00:58:41.554225 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-13 00:58:41.554231 | orchestrator | Friday 13 March 2026 00:57:29 +0000 (0:00:00.140) 0:00:10.269 ********** 2026-03-13 00:58:41.554237 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:58:41.554243 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:58:41.554249 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:58:41.554255 | orchestrator | 2026-03-13 00:58:41.554261 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-13 00:58:41.554267 | orchestrator | Friday 13 March 2026 00:57:30 +0000 (0:00:00.324) 0:00:10.593 ********** 2026-03-13 00:58:41.554274 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:58:41.554279 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:58:41.554285 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:58:41.554291 | orchestrator | 2026-03-13 00:58:41.554297 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-13 00:58:41.554302 | orchestrator | Friday 13 March 2026 00:57:30 +0000 (0:00:00.493) 0:00:11.087 ********** 2026-03-13 00:58:41.554308 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:58:41.554322 | orchestrator | 2026-03-13 00:58:41.554328 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-13 00:58:41.554334 | orchestrator | Friday 13 March 2026 00:57:30 +0000 (0:00:00.145) 0:00:11.233 ********** 2026-03-13 00:58:41.554343 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:58:41.554350 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:58:41.554356 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:58:41.554363 | orchestrator | 2026-03-13 00:58:41.554369 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-13 00:58:41.554375 | orchestrator | 
Friday 13 March 2026 00:57:31 +0000 (0:00:00.295) 0:00:11.528 ********** 2026-03-13 00:58:41.554382 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:58:41.554387 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:58:41.554394 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:58:41.554400 | orchestrator | 2026-03-13 00:58:41.554406 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-13 00:58:41.554412 | orchestrator | Friday 13 March 2026 00:57:31 +0000 (0:00:00.293) 0:00:11.822 ********** 2026-03-13 00:58:41.554418 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:58:41.554424 | orchestrator | 2026-03-13 00:58:41.554434 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-13 00:58:41.554442 | orchestrator | Friday 13 March 2026 00:57:31 +0000 (0:00:00.120) 0:00:11.942 ********** 2026-03-13 00:58:41.554448 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:58:41.554453 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:58:41.554459 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:58:41.554466 | orchestrator | 2026-03-13 00:58:41.554484 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-03-13 00:58:41.554491 | orchestrator | Friday 13 March 2026 00:57:31 +0000 (0:00:00.452) 0:00:12.394 ********** 2026-03-13 00:58:41.554501 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:58:41.554507 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:58:41.554513 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:58:41.554519 | orchestrator | 2026-03-13 00:58:41.554526 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-03-13 00:58:41.554532 | orchestrator | Friday 13 March 2026 00:57:33 +0000 (0:00:01.699) 0:00:14.094 ********** 2026-03-13 00:58:41.554538 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-13 00:58:41.554546 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-13 00:58:41.554554 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-13 00:58:41.554562 | orchestrator |
2026-03-13 00:58:41.554569 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-03-13 00:58:41.554575 | orchestrator | Friday 13 March 2026 00:57:35 +0000 (0:00:01.659) 0:00:15.754 **********
2026-03-13 00:58:41.554581 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-13 00:58:41.554588 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-13 00:58:41.554594 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-13 00:58:41.554600 | orchestrator |
2026-03-13 00:58:41.554606 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-03-13 00:58:41.554624 | orchestrator | Friday 13 March 2026 00:57:37 +0000 (0:00:02.089) 0:00:17.843 **********
2026-03-13 00:58:41.554631 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-03-13 00:58:41.554638 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-03-13 00:58:41.554646 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-03-13 00:58:41.554667 | orchestrator |
2026-03-13 00:58:41.554673 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-03-13 00:58:41.554679 | orchestrator | Friday 13 March 2026 00:57:39 +0000 (0:00:01.924) 0:00:19.767 **********
2026-03-13 00:58:41.554685 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:58:41.554691 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:58:41.554697 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:58:41.554704 | orchestrator |
2026-03-13 00:58:41.554710 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-03-13 00:58:41.554721 | orchestrator | Friday 13 March 2026 00:57:39 +0000 (0:00:00.262) 0:00:20.030 **********
2026-03-13 00:58:41.554728 | orchestrator | skipping: [testbed-node-0]
2026-03-13 00:58:41.554734 | orchestrator | skipping: [testbed-node-1]
2026-03-13 00:58:41.554741 | orchestrator | skipping: [testbed-node-2]
2026-03-13 00:58:41.554747 | orchestrator |
2026-03-13 00:58:41.554753 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-13 00:58:41.554759 | orchestrator | Friday 13 March 2026 00:57:39 +0000 (0:00:00.269) 0:00:20.299 **********
2026-03-13 00:58:41.554765 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 00:58:41.554771 | orchestrator |
2026-03-13 00:58:41.554777 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2026-03-13 00:58:41.554784 | orchestrator | Friday 13 March 2026 00:57:40 +0000 (0:00:00.623) 0:00:20.922 **********
2026-03-13 00:58:41.554794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes',
'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-13 00:58:41.554817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 
'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-13 00:58:41.554832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 
'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 
'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-13 00:58:41.554839 | orchestrator | 2026-03-13 00:58:41.554847 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-13 00:58:41.554857 | orchestrator | Friday 13 March 2026 00:57:41 +0000 (0:00:01.534) 0:00:22.457 ********** 2026-03-13 00:58:41.554872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-13 00:58:41.554878 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:58:41.554886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-13 00:58:41.554894 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:58:41.554903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-13 00:58:41.554911 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:58:41.554917 | orchestrator | 2026-03-13 00:58:41.554924 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-03-13 00:58:41.554930 | orchestrator | Friday 13 March 2026 00:57:42 +0000 (0:00:00.560) 0:00:23.018 ********** 2026-03-13 00:58:41.554942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-13 00:58:41.554957 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:58:41.554965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 
'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-13 00:58:41.554969 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:58:41.555000 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-13 00:58:41.555013 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:58:41.555021 | orchestrator | 2026-03-13 00:58:41.555029 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-03-13 00:58:41.555036 | orchestrator | Friday 13 March 2026 00:57:43 +0000 (0:00:00.774) 0:00:23.793 ********** 2026-03-13 00:58:41.555042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-13 00:58:41.555058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 
'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-13 00:58:41.555071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-13 00:58:41.555082 | orchestrator | 2026-03-13 00:58:41.555090 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-13 00:58:41.555099 | orchestrator | Friday 13 March 2026 00:57:44 +0000 (0:00:01.322) 0:00:25.115 ********** 2026-03-13 00:58:41.555105 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:58:41.555110 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:58:41.555116 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:58:41.555122 | orchestrator | 2026-03-13 00:58:41.555128 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-13 00:58:41.555133 | orchestrator | Friday 13 March 2026 00:57:44 +0000 (0:00:00.264) 0:00:25.380 ********** 2026-03-13 00:58:41.555139 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:58:41.555145 | orchestrator | 2026-03-13 00:58:41.555150 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-03-13 00:58:41.555162 | orchestrator | Friday 13 March 2026 00:57:45 +0000 (0:00:00.499) 0:00:25.880 ********** 2026-03-13 00:58:41.555166 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:58:41.555170 | orchestrator | 2026-03-13 00:58:41.555174 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-03-13 00:58:41.555177 | orchestrator | Friday 13 March 2026 00:57:47 +0000 (0:00:02.425) 0:00:28.305 ********** 2026-03-13 00:58:41.555181 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:58:41.555185 | orchestrator | 2026-03-13 00:58:41.555188 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-03-13 00:58:41.555192 | orchestrator | Friday 13 March 2026 00:57:50 +0000 (0:00:02.529) 0:00:30.834 ********** 2026-03-13 00:58:41.555196 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:58:41.555199 | orchestrator | 2026-03-13 00:58:41.555205 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-13 00:58:41.555211 | orchestrator | Friday 13 March 2026 00:58:04 +0000 (0:00:14.356) 0:00:45.190 ********** 2026-03-13 00:58:41.555218 | orchestrator | 2026-03-13 00:58:41.555224 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-13 00:58:41.555230 | orchestrator | Friday 13 March 2026 00:58:04 +0000 (0:00:00.064) 0:00:45.255 ********** 2026-03-13 00:58:41.555237 | orchestrator | 2026-03-13 00:58:41.555249 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-13 00:58:41.555255 | orchestrator | Friday 13 March 2026 00:58:04 +0000 (0:00:00.062) 0:00:45.317 ********** 2026-03-13 00:58:41.555259 
| orchestrator | 2026-03-13 00:58:41.555263 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-03-13 00:58:41.555266 | orchestrator | Friday 13 March 2026 00:58:04 +0000 (0:00:00.065) 0:00:45.383 ********** 2026-03-13 00:58:41.555270 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:58:41.555274 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:58:41.555277 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:58:41.555281 | orchestrator | 2026-03-13 00:58:41.555285 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 00:58:41.555288 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-13 00:58:41.555294 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-13 00:58:41.555301 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-13 00:58:41.555307 | orchestrator | 2026-03-13 00:58:41.555313 | orchestrator | 2026-03-13 00:58:41.555319 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 00:58:41.555331 | orchestrator | Friday 13 March 2026 00:58:40 +0000 (0:00:35.948) 0:01:21.331 ********** 2026-03-13 00:58:41.555337 | orchestrator | =============================================================================== 2026-03-13 00:58:41.555343 | orchestrator | horizon : Restart horizon container ------------------------------------ 35.95s 2026-03-13 00:58:41.555349 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.36s 2026-03-13 00:58:41.555355 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.53s 2026-03-13 00:58:41.555362 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.43s 
2026-03-13 00:58:41.555370 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.09s 2026-03-13 00:58:41.555378 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.92s 2026-03-13 00:58:41.555383 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.70s 2026-03-13 00:58:41.555389 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.66s 2026-03-13 00:58:41.555395 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.53s 2026-03-13 00:58:41.555401 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.32s 2026-03-13 00:58:41.555407 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.29s 2026-03-13 00:58:41.555413 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.77s 2026-03-13 00:58:41.555420 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.77s 2026-03-13 00:58:41.555425 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.62s 2026-03-13 00:58:41.555431 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.56s 2026-03-13 00:58:41.555437 | orchestrator | horizon : Update policy file name --------------------------------------- 0.54s 2026-03-13 00:58:41.555443 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.50s 2026-03-13 00:58:41.555449 | orchestrator | horizon : Update policy file name --------------------------------------- 0.49s 2026-03-13 00:58:41.555455 | orchestrator | horizon : Update policy file name --------------------------------------- 0.49s 2026-03-13 00:58:41.555461 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.49s 
2026-03-13 00:58:41.555466 | orchestrator | 2026-03-13 00:58:41 | INFO  | Task 52ff5064-272a-4a3f-b0f5-a223baabdd01 is in state STARTED 2026-03-13 00:58:41.555483 | orchestrator | 2026-03-13 00:58:41 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:58:44.588838 | orchestrator | 2026-03-13 00:58:44 | INFO  | Task 9e3c0dc2-27f9-4133-a7e6-aef4b260d5d5 is in state STARTED 2026-03-13 00:58:44.590007 | orchestrator | 2026-03-13 00:58:44 | INFO  | Task 52ff5064-272a-4a3f-b0f5-a223baabdd01 is in state STARTED 2026-03-13 00:58:44.590124 | orchestrator | 2026-03-13 00:58:44 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:58:47.633710 | orchestrator | 2026-03-13 00:58:47 | INFO  | Task 9e3c0dc2-27f9-4133-a7e6-aef4b260d5d5 is in state STARTED 2026-03-13 00:58:47.634639 | orchestrator | 2026-03-13 00:58:47 | INFO  | Task 52ff5064-272a-4a3f-b0f5-a223baabdd01 is in state STARTED 2026-03-13 00:58:47.634868 | orchestrator | 2026-03-13 00:58:47 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:58:50.682083 | orchestrator | 2026-03-13 00:58:50 | INFO  | Task 9e3c0dc2-27f9-4133-a7e6-aef4b260d5d5 is in state STARTED 2026-03-13 00:58:50.683999 | orchestrator | 2026-03-13 00:58:50 | INFO  | Task 52ff5064-272a-4a3f-b0f5-a223baabdd01 is in state STARTED 2026-03-13 00:58:50.684055 | orchestrator | 2026-03-13 00:58:50 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:58:53.732547 | orchestrator | 2026-03-13 00:58:53 | INFO  | Task a94401c5-2226-41b1-ad04-413573929ddb is in state STARTED 2026-03-13 00:58:53.734573 | orchestrator | 2026-03-13 00:58:53 | INFO  | Task 9e3c0dc2-27f9-4133-a7e6-aef4b260d5d5 is in state STARTED 2026-03-13 00:58:53.738798 | orchestrator | 2026-03-13 00:58:53 | INFO  | Task 52ff5064-272a-4a3f-b0f5-a223baabdd01 is in state SUCCESS 2026-03-13 00:58:53.738913 | orchestrator | 2026-03-13 00:58:53.740763 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-13 00:58:53.740822 | 
orchestrator | 2.16.14 2026-03-13 00:58:53.740832 | orchestrator | 2026-03-13 00:58:53.740839 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-03-13 00:58:53.740845 | orchestrator | 2026-03-13 00:58:53.740852 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-13 00:58:53.740858 | orchestrator | Friday 13 March 2026 00:56:53 +0000 (0:00:00.558) 0:00:00.558 ********** 2026-03-13 00:58:53.740864 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-13 00:58:53.740872 | orchestrator | 2026-03-13 00:58:53.740878 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-13 00:58:53.740885 | orchestrator | Friday 13 March 2026 00:56:53 +0000 (0:00:00.584) 0:00:01.142 ********** 2026-03-13 00:58:53.740891 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:58:53.740898 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:58:53.740904 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:58:53.740910 | orchestrator | 2026-03-13 00:58:53.740916 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-13 00:58:53.740922 | orchestrator | Friday 13 March 2026 00:56:54 +0000 (0:00:00.620) 0:00:01.763 ********** 2026-03-13 00:58:53.740976 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:58:53.740982 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:58:53.740988 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:58:53.740994 | orchestrator | 2026-03-13 00:58:53.741000 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-13 00:58:53.741007 | orchestrator | Friday 13 March 2026 00:56:54 +0000 (0:00:00.265) 0:00:02.028 ********** 2026-03-13 00:58:53.741013 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:58:53.741069 | orchestrator | ok: 
[testbed-node-4] 2026-03-13 00:58:53.741079 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:58:53.741085 | orchestrator | 2026-03-13 00:58:53.741091 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-13 00:58:53.741097 | orchestrator | Friday 13 March 2026 00:56:55 +0000 (0:00:00.712) 0:00:02.740 ********** 2026-03-13 00:58:53.741103 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:58:53.741109 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:58:53.741114 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:58:53.741119 | orchestrator | 2026-03-13 00:58:53.741157 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-13 00:58:53.741165 | orchestrator | Friday 13 March 2026 00:56:55 +0000 (0:00:00.259) 0:00:03.000 ********** 2026-03-13 00:58:53.741171 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:58:53.741178 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:58:53.741184 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:58:53.741190 | orchestrator | 2026-03-13 00:58:53.741196 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-13 00:58:53.741202 | orchestrator | Friday 13 March 2026 00:56:55 +0000 (0:00:00.286) 0:00:03.286 ********** 2026-03-13 00:58:53.741302 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:58:53.741323 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:58:53.741331 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:58:53.741337 | orchestrator | 2026-03-13 00:58:53.741343 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-13 00:58:53.741352 | orchestrator | Friday 13 March 2026 00:56:56 +0000 (0:00:00.294) 0:00:03.581 ********** 2026-03-13 00:58:53.741358 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:58:53.741366 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:58:53.741372 | orchestrator 
| skipping: [testbed-node-5] 2026-03-13 00:58:53.741393 | orchestrator | 2026-03-13 00:58:53.741397 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-13 00:58:53.741402 | orchestrator | Friday 13 March 2026 00:56:56 +0000 (0:00:00.406) 0:00:03.987 ********** 2026-03-13 00:58:53.741406 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:58:53.741409 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:58:53.741413 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:58:53.741417 | orchestrator | 2026-03-13 00:58:53.741421 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-13 00:58:53.741425 | orchestrator | Friday 13 March 2026 00:56:56 +0000 (0:00:00.273) 0:00:04.261 ********** 2026-03-13 00:58:53.741430 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-13 00:58:53.741434 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-13 00:58:53.741438 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-13 00:58:53.741441 | orchestrator | 2026-03-13 00:58:53.741445 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-13 00:58:53.742174 | orchestrator | Friday 13 March 2026 00:56:57 +0000 (0:00:00.527) 0:00:04.788 ********** 2026-03-13 00:58:53.742239 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:58:53.742249 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:58:53.742255 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:58:53.742259 | orchestrator | 2026-03-13 00:58:53.742264 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-13 00:58:53.742268 | orchestrator | Friday 13 March 2026 00:56:57 +0000 (0:00:00.374) 0:00:05.163 ********** 2026-03-13 00:58:53.742272 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-13 00:58:53.742276 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-13 00:58:53.742280 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-13 00:58:53.742284 | orchestrator | 2026-03-13 00:58:53.742301 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-13 00:58:53.742305 | orchestrator | Friday 13 March 2026 00:56:59 +0000 (0:00:01.818) 0:00:06.982 ********** 2026-03-13 00:58:53.742309 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-13 00:58:53.742313 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-13 00:58:53.742317 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-13 00:58:53.742322 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:58:53.742328 | orchestrator | 2026-03-13 00:58:53.742412 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-13 00:58:53.742421 | orchestrator | Friday 13 March 2026 00:56:59 +0000 (0:00:00.526) 0:00:07.509 ********** 2026-03-13 00:58:53.742427 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-13 00:58:53.742433 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-13 00:58:53.742437 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-13 00:58:53.742441 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:58:53.742446 | orchestrator | 2026-03-13 00:58:53.742471 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-13 00:58:53.742478 | orchestrator | Friday 13 March 2026 00:57:00 +0000 (0:00:00.700) 0:00:08.209 ********** 2026-03-13 00:58:53.742485 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-13 00:58:53.742513 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-13 00:58:53.742520 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-13 00:58:53.742527 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:58:53.742533 | orchestrator | 2026-03-13 00:58:53.742539 | orchestrator | TASK [ceph-facts : Set_fact 
running_mon - container] *************************** 2026-03-13 00:58:53.742546 | orchestrator | Friday 13 March 2026 00:57:00 +0000 (0:00:00.272) 0:00:08.482 ********** 2026-03-13 00:58:53.742554 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '79dd92007179', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-13 00:56:58.193337', 'end': '2026-03-13 00:56:58.227476', 'delta': '0:00:00.034139', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['79dd92007179'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-13 00:58:53.742570 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '49db89e4ffa7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-13 00:56:58.813284', 'end': '2026-03-13 00:56:58.838490', 'delta': '0:00:00.025206', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['49db89e4ffa7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-13 00:58:53.742600 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'ccbf70212750', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-13 
00:56:59.272065', 'end': '2026-03-13 00:56:59.299725', 'delta': '0:00:00.027660', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['ccbf70212750'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-13 00:58:53.742607 | orchestrator | 2026-03-13 00:58:53.742614 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-13 00:58:53.742627 | orchestrator | Friday 13 March 2026 00:57:01 +0000 (0:00:00.170) 0:00:08.653 ********** 2026-03-13 00:58:53.742634 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:58:53.742640 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:58:53.742647 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:58:53.742654 | orchestrator | 2026-03-13 00:58:53.742660 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-13 00:58:53.742667 | orchestrator | Friday 13 March 2026 00:57:01 +0000 (0:00:00.395) 0:00:09.048 ********** 2026-03-13 00:58:53.742672 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-13 00:58:53.742676 | orchestrator | 2026-03-13 00:58:53.742680 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-13 00:58:53.742684 | orchestrator | Friday 13 March 2026 00:57:03 +0000 (0:00:01.620) 0:00:10.669 ********** 2026-03-13 00:58:53.742687 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:58:53.742691 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:58:53.742695 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:58:53.742702 | orchestrator | 2026-03-13 00:58:53.742708 | 
orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-13 00:58:53.742714 | orchestrator | Friday 13 March 2026 00:57:03 +0000 (0:00:00.272) 0:00:10.942 ********** 2026-03-13 00:58:53.742720 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:58:53.742726 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:58:53.742732 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:58:53.742738 | orchestrator | 2026-03-13 00:58:53.742744 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-13 00:58:53.742750 | orchestrator | Friday 13 March 2026 00:57:03 +0000 (0:00:00.339) 0:00:11.282 ********** 2026-03-13 00:58:53.742756 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:58:53.742762 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:58:53.742768 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:58:53.742773 | orchestrator | 2026-03-13 00:58:53.742780 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-13 00:58:53.742786 | orchestrator | Friday 13 March 2026 00:57:04 +0000 (0:00:00.418) 0:00:11.700 ********** 2026-03-13 00:58:53.742792 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:58:53.742799 | orchestrator | 2026-03-13 00:58:53.742805 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-13 00:58:53.742811 | orchestrator | Friday 13 March 2026 00:57:04 +0000 (0:00:00.110) 0:00:11.811 ********** 2026-03-13 00:58:53.742817 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:58:53.742823 | orchestrator | 2026-03-13 00:58:53.742829 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-13 00:58:53.742836 | orchestrator | Friday 13 March 2026 00:57:04 +0000 (0:00:00.207) 0:00:12.019 ********** 2026-03-13 00:58:53.742842 | orchestrator | skipping: [testbed-node-3] 
2026-03-13 00:58:53.742849 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:58:53.742855 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:58:53.742861 | orchestrator | 2026-03-13 00:58:53.742867 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-13 00:58:53.742873 | orchestrator | Friday 13 March 2026 00:57:04 +0000 (0:00:00.262) 0:00:12.281 ********** 2026-03-13 00:58:53.742879 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:58:53.742885 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:58:53.742893 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:58:53.742901 | orchestrator | 2026-03-13 00:58:53.742908 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-13 00:58:53.742914 | orchestrator | Friday 13 March 2026 00:57:05 +0000 (0:00:00.319) 0:00:12.600 ********** 2026-03-13 00:58:53.742919 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:58:53.742926 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:58:53.742932 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:58:53.742938 | orchestrator | 2026-03-13 00:58:53.742944 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-13 00:58:53.742956 | orchestrator | Friday 13 March 2026 00:57:05 +0000 (0:00:00.412) 0:00:13.013 ********** 2026-03-13 00:58:53.742963 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:58:53.742970 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:58:53.742976 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:58:53.742982 | orchestrator | 2026-03-13 00:58:53.742988 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-13 00:58:53.742994 | orchestrator | Friday 13 March 2026 00:57:05 +0000 (0:00:00.277) 0:00:13.290 ********** 2026-03-13 00:58:53.743000 | orchestrator | skipping: [testbed-node-3] 
2026-03-13 00:58:53.743014 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:58:53.743020 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:58:53.743026 | orchestrator | 2026-03-13 00:58:53.743033 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-13 00:58:53.743058 | orchestrator | Friday 13 March 2026 00:57:06 +0000 (0:00:00.295) 0:00:13.586 ********** 2026-03-13 00:58:53.743064 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:58:53.743071 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:58:53.743078 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:58:53.743084 | orchestrator | 2026-03-13 00:58:53.743121 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-13 00:58:53.743129 | orchestrator | Friday 13 March 2026 00:57:06 +0000 (0:00:00.293) 0:00:13.880 ********** 2026-03-13 00:58:53.743136 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:58:53.743143 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:58:53.743149 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:58:53.743155 | orchestrator | 2026-03-13 00:58:53.743161 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-13 00:58:53.743168 | orchestrator | Friday 13 March 2026 00:57:06 +0000 (0:00:00.408) 0:00:14.288 ********** 2026-03-13 00:58:53.743176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b5494c86--4b11--53e5--88ab--5da9d8a68a1e-osd--block--b5494c86--4b11--53e5--88ab--5da9d8a68a1e', 'dm-uuid-LVM-LPHl0YzeI6FamkHwpYfPFLYvA4jefdeLB0n60KxVDZol4Rt6ZGCDu50Tpw7xyBAY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 
'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-13 00:58:53.743184 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b7299377--1bbd--5436--9d58--2dd820a08a2f-osd--block--b7299377--1bbd--5436--9d58--2dd820a08a2f', 'dm-uuid-LVM-HoHeTclBr30fca9ZNFZuhsY6pk6aA3QcxtHyPjIk3J5AIumWTBgltxaGzq9CnrMA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-13 00:58:53.743191 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:58:53.743198 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:58:53.743211 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:58:53.743218 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:58:53.743229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:58:53.743259 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:58:53.743268 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:58:53.743274 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:58:53.743285 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d', 'scsi-SQEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d-part1', 'scsi-SQEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d-part14', 'scsi-SQEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d-part15', 'scsi-SQEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d-part16', 
'scsi-SQEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-13 00:58:53.743301 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--49707cb0--36ac--571b--bf56--7288c46886ca-osd--block--49707cb0--36ac--571b--bf56--7288c46886ca', 'dm-uuid-LVM-lUuxAlRxeDpKHFR330Fw0ajQMZxdmGdFcZe0ZY3SvPyxgqFjJLxezDxIRmkhNvve'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-13 00:58:53.743333 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b5494c86--4b11--53e5--88ab--5da9d8a68a1e-osd--block--b5494c86--4b11--53e5--88ab--5da9d8a68a1e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LXpqTL-G1pt-XewF-Zt4p-vrnA-Ynye-ARUN64', 'scsi-0QEMU_QEMU_HARDDISK_0e74aa03-0dd7-4a4d-94fb-6534b2ad29b5', 'scsi-SQEMU_QEMU_HARDDISK_0e74aa03-0dd7-4a4d-94fb-6534b2ad29b5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-13 00:58:53.743341 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--798cee0b--732e--51b2--a8a3--29d8c2932297-osd--block--798cee0b--732e--51b2--a8a3--29d8c2932297', 'dm-uuid-LVM-L1OBBH0k0D00ZH0dN8uE5pJTqoWU0KZEfPq0LLMud7Q5AWoDnaD4QV1JonD11yi2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-13 00:58:53.743349 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b7299377--1bbd--5436--9d58--2dd820a08a2f-osd--block--b7299377--1bbd--5436--9d58--2dd820a08a2f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xU3y9w-lno0-fYYl-h6C2-Bafl-jXiW-zSsbBh', 'scsi-0QEMU_QEMU_HARDDISK_e5b6d572-8591-43aa-97f9-3b718c2d248a', 'scsi-SQEMU_QEMU_HARDDISK_e5b6d572-8591-43aa-97f9-3b718c2d248a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-13 00:58:53.743356 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:58:53.743368 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b47ce045-806c-4f33-b887-31de2316680c', 'scsi-SQEMU_QEMU_HARDDISK_b47ce045-806c-4f33-b887-31de2316680c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-13 00:58:53.743376 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:58:53.743387 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-13-00-03-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-13 00:58:53.743394 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:58:53.743422 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:58:53.743429 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:58:53.743436 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:58:53.743443 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:58:53.743449 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:58:53.743506 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 
'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--119e494c--61db--56d2--84c4--ae65d8356f6a-osd--block--119e494c--61db--56d2--84c4--ae65d8356f6a', 'dm-uuid-LVM-aKLT8JNCOXsBc0C1gwIdNTjLoGLtcq6z5t48Wuu2NVQ4Z0cbe51erZOUcnYreOLk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-13 00:58:53.743514 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5854fe4a--6d96--56a2--8017--73d7ac8736b8-osd--block--5854fe4a--6d96--56a2--8017--73d7ac8736b8', 'dm-uuid-LVM-DanwlmyYXjv3W8jDd7gIIXAnF5dZXwutprgamNuSW6Fu1UsLU31ga3JUkWu8KPCy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-13 00:58:53.743521 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:58:53.743532 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:58:53.743545 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233', 'scsi-SQEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233-part1', 'scsi-SQEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233-part14', 'scsi-SQEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233-part15', 'scsi-SQEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233-part16', 'scsi-SQEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-13 00:58:53.743563 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:58:53.743571 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--49707cb0--36ac--571b--bf56--7288c46886ca-osd--block--49707cb0--36ac--571b--bf56--7288c46886ca'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JdLIXO-MqwE-lr4R-7jAl-Oajp-v9D3-BfnDcq', 'scsi-0QEMU_QEMU_HARDDISK_9a254b57-f2ae-4287-95a0-937fffba734e', 'scsi-SQEMU_QEMU_HARDDISK_9a254b57-f2ae-4287-95a0-937fffba734e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-13 00:58:53.743577 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:58:53.743594 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--798cee0b--732e--51b2--a8a3--29d8c2932297-osd--block--798cee0b--732e--51b2--a8a3--29d8c2932297'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MNWIhf-nwp0-tXaF-WOrc-iNMC-u1FO-4vKX4g', 'scsi-0QEMU_QEMU_HARDDISK_5123065a-17ef-4227-8b29-db8d7701c704', 'scsi-SQEMU_QEMU_HARDDISK_5123065a-17ef-4227-8b29-db8d7701c704'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-13 00:58:53.743601 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:58:53.743607 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e49f76b8-3d49-472e-b9d5-6b475ff66b1a', 'scsi-SQEMU_QEMU_HARDDISK_e49f76b8-3d49-472e-b9d5-6b475ff66b1a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-13 00:58:53.743614 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:58:53.743626 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-13-00-03-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-13 00:58:53.743633 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:58:53.743640 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:58:53.743647 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:58:53.743653 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-13 00:58:53.743668 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf', 'scsi-SQEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf-part1', 'scsi-SQEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf-part14', 'scsi-SQEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf-part15', 'scsi-SQEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf-part16', 'scsi-SQEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-13 00:58:53.743679 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--119e494c--61db--56d2--84c4--ae65d8356f6a-osd--block--119e494c--61db--56d2--84c4--ae65d8356f6a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-M47k6j-4Htg-8gMw-gFQx-rYEL-zlZr-SG96Cv', 'scsi-0QEMU_QEMU_HARDDISK_173613da-cd5d-4175-9e2e-faf4092bf0a3', 'scsi-SQEMU_QEMU_HARDDISK_173613da-cd5d-4175-9e2e-faf4092bf0a3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-13 00:58:53.743687 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5854fe4a--6d96--56a2--8017--73d7ac8736b8-osd--block--5854fe4a--6d96--56a2--8017--73d7ac8736b8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-nOtw0g-XnPm-13J8-zFZd-lk1r-0DqR-r1FckL', 'scsi-0QEMU_QEMU_HARDDISK_dc527160-e7af-4d74-be06-07ea7bd10a9b', 'scsi-SQEMU_QEMU_HARDDISK_dc527160-e7af-4d74-be06-07ea7bd10a9b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-13 00:58:53.743697 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2fefda09-8576-4844-bc1b-e9a7eb3ad8aa', 'scsi-SQEMU_QEMU_HARDDISK_2fefda09-8576-4844-bc1b-e9a7eb3ad8aa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-13 00:58:53.743709 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-13-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-13 00:58:53.743716 | orchestrator | skipping: [testbed-node-5]
2026-03-13 00:58:53.743722 | orchestrator |
2026-03-13 00:58:53.743729 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-13 00:58:53.743735 | orchestrator | Friday 13 March 2026 00:57:07 +0000 (0:00:00.508) 0:00:14.797 **********
2026-03-13 00:58:53.743742 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b5494c86--4b11--53e5--88ab--5da9d8a68a1e-osd--block--b5494c86--4b11--53e5--88ab--5da9d8a68a1e', 'dm-uuid-LVM-LPHl0YzeI6FamkHwpYfPFLYvA4jefdeLB0n60KxVDZol4Rt6ZGCDu50Tpw7xyBAY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.743755 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b7299377--1bbd--5436--9d58--2dd820a08a2f-osd--block--b7299377--1bbd--5436--9d58--2dd820a08a2f', 'dm-uuid-LVM-HoHeTclBr30fca9ZNFZuhsY6pk6aA3QcxtHyPjIk3J5AIumWTBgltxaGzq9CnrMA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.743762 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.743768 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool',
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.743779 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.743792 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.743799 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.743810 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.743817 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.743823 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.743839 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d', 'scsi-SQEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d-part1', 'scsi-SQEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d-part14', 'scsi-SQEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d-part15', 'scsi-SQEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids':
['scsi-0QEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d-part16', 'scsi-SQEMU_QEMU_HARDDISK_97f75f20-579f-4518-b7ce-4d90969f977d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.743852 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--b5494c86--4b11--53e5--88ab--5da9d8a68a1e-osd--block--b5494c86--4b11--53e5--88ab--5da9d8a68a1e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LXpqTL-G1pt-XewF-Zt4p-vrnA-Ynye-ARUN64', 'scsi-0QEMU_QEMU_HARDDISK_0e74aa03-0dd7-4a4d-94fb-6534b2ad29b5', 'scsi-SQEMU_QEMU_HARDDISK_0e74aa03-0dd7-4a4d-94fb-6534b2ad29b5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.743861 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b7299377--1bbd--5436--9d58--2dd820a08a2f-osd--block--b7299377--1bbd--5436--9d58--2dd820a08a2f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xU3y9w-lno0-fYYl-h6C2-Bafl-jXiW-zSsbBh', 'scsi-0QEMU_QEMU_HARDDISK_e5b6d572-8591-43aa-97f9-3b718c2d248a', 'scsi-SQEMU_QEMU_HARDDISK_e5b6d572-8591-43aa-97f9-3b718c2d248a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.743867 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b47ce045-806c-4f33-b887-31de2316680c', 'scsi-SQEMU_QEMU_HARDDISK_b47ce045-806c-4f33-b887-31de2316680c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.743882 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-13-00-03-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.743890 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--49707cb0--36ac--571b--bf56--7288c46886ca-osd--block--49707cb0--36ac--571b--bf56--7288c46886ca', 'dm-uuid-LVM-lUuxAlRxeDpKHFR330Fw0ajQMZxdmGdFcZe0ZY3SvPyxgqFjJLxezDxIRmkhNvve'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors':
41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.743900 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--798cee0b--732e--51b2--a8a3--29d8c2932297-osd--block--798cee0b--732e--51b2--a8a3--29d8c2932297', 'dm-uuid-LVM-L1OBBH0k0D00ZH0dN8uE5pJTqoWU0KZEfPq0LLMud7Q5AWoDnaD4QV1JonD11yi2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.743906 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.743913 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.743923 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.743935 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.743941 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.743952 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.743959 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.743965 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0,
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.743971 | orchestrator | skipping: [testbed-node-3]
2026-03-13 00:58:53.743989 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233', 'scsi-SQEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233-part1', 'scsi-SQEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233-part14', 'scsi-SQEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233-part15', 'scsi-SQEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233-part16', 'scsi-SQEMU_QEMU_HARDDISK_b2c3f3a5-d054-4214-843e-d9b33fe0d233-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.744000 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--49707cb0--36ac--571b--bf56--7288c46886ca-osd--block--49707cb0--36ac--571b--bf56--7288c46886ca'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JdLIXO-MqwE-lr4R-7jAl-Oajp-v9D3-BfnDcq', 'scsi-0QEMU_QEMU_HARDDISK_9a254b57-f2ae-4287-95a0-937fffba734e', 'scsi-SQEMU_QEMU_HARDDISK_9a254b57-f2ae-4287-95a0-937fffba734e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.744007 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--798cee0b--732e--51b2--a8a3--29d8c2932297-osd--block--798cee0b--732e--51b2--a8a3--29d8c2932297'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MNWIhf-nwp0-tXaF-WOrc-iNMC-u1FO-4vKX4g', 'scsi-0QEMU_QEMU_HARDDISK_5123065a-17ef-4227-8b29-db8d7701c704', 'scsi-SQEMU_QEMU_HARDDISK_5123065a-17ef-4227-8b29-db8d7701c704'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.744013 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e49f76b8-3d49-472e-b9d5-6b475ff66b1a', 'scsi-SQEMU_QEMU_HARDDISK_e49f76b8-3d49-472e-b9d5-6b475ff66b1a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.744027 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-13-00-03-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None,
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.744040 | orchestrator | skipping: [testbed-node-4]
2026-03-13 00:58:53.744046 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--119e494c--61db--56d2--84c4--ae65d8356f6a-osd--block--119e494c--61db--56d2--84c4--ae65d8356f6a', 'dm-uuid-LVM-aKLT8JNCOXsBc0C1gwIdNTjLoGLtcq6z5t48Wuu2NVQ4Z0cbe51erZOUcnYreOLk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.744052 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5854fe4a--6d96--56a2--8017--73d7ac8736b8-osd--block--5854fe4a--6d96--56a2--8017--73d7ac8736b8', 'dm-uuid-LVM-DanwlmyYXjv3W8jDd7gIIXAnF5dZXwutprgamNuSW6Fu1UsLU31ga3JUkWu8KPCy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.744058 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.744065 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.744075 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.744086 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.744097 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.744104 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.744110 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery |
default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.744116 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.744129 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf', 'scsi-SQEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf-part1', 'scsi-SQEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf-part14', 'scsi-SQEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf-part15', 'scsi-SQEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf-part16', 'scsi-SQEMU_QEMU_HARDDISK_50cdff76-9cd5-47b7-8bb7-718e614446bf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.744142 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--119e494c--61db--56d2--84c4--ae65d8356f6a-osd--block--119e494c--61db--56d2--84c4--ae65d8356f6a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-M47k6j-4Htg-8gMw-gFQx-rYEL-zlZr-SG96Cv', 'scsi-0QEMU_QEMU_HARDDISK_173613da-cd5d-4175-9e2e-faf4092bf0a3', 'scsi-SQEMU_QEMU_HARDDISK_173613da-cd5d-4175-9e2e-faf4092bf0a3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-13 00:58:53.744149 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--5854fe4a--6d96--56a2--8017--73d7ac8736b8-osd--block--5854fe4a--6d96--56a2--8017--73d7ac8736b8'], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-nOtw0g-XnPm-13J8-zFZd-lk1r-0DqR-r1FckL', 'scsi-0QEMU_QEMU_HARDDISK_dc527160-e7af-4d74-be06-07ea7bd10a9b', 'scsi-SQEMU_QEMU_HARDDISK_dc527160-e7af-4d74-be06-07ea7bd10a9b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-13 00:58:53.744156 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2fefda09-8576-4844-bc1b-e9a7eb3ad8aa', 'scsi-SQEMU_QEMU_HARDDISK_2fefda09-8576-4844-bc1b-e9a7eb3ad8aa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-13 00:58:53.744168 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-13-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-13 00:58:53.744180 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:58:53.744187 | orchestrator | 2026-03-13 00:58:53.744193 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-13 00:58:53.744200 | orchestrator | Friday 13 March 2026 00:57:07 +0000 (0:00:00.679) 0:00:15.476 ********** 2026-03-13 00:58:53.744206 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:58:53.744213 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:58:53.744218 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:58:53.744224 | orchestrator | 2026-03-13 00:58:53.744230 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-13 00:58:53.744237 | orchestrator | Friday 13 March 2026 00:57:08 +0000 (0:00:00.704) 0:00:16.180 ********** 2026-03-13 00:58:53.744242 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:58:53.744248 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:58:53.744254 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:58:53.744260 | orchestrator | 2026-03-13 00:58:53.744267 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-13 00:58:53.744274 | orchestrator | Friday 13 March 2026 00:57:09 +0000 (0:00:00.411) 0:00:16.591 ********** 2026-03-13 00:58:53.744280 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:58:53.744286 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:58:53.744292 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:58:53.744298 | orchestrator | 2026-03-13 00:58:53.744304 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-13 00:58:53.744310 | orchestrator | Friday 13 March 2026 00:57:09 +0000 (0:00:00.683) 0:00:17.275 
********** 2026-03-13 00:58:53.744315 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:58:53.744320 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:58:53.744326 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:58:53.744332 | orchestrator | 2026-03-13 00:58:53.744339 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-13 00:58:53.744345 | orchestrator | Friday 13 March 2026 00:57:10 +0000 (0:00:00.267) 0:00:17.542 ********** 2026-03-13 00:58:53.744353 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:58:53.744359 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:58:53.744365 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:58:53.744371 | orchestrator | 2026-03-13 00:58:53.744377 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-13 00:58:53.744382 | orchestrator | Friday 13 March 2026 00:57:10 +0000 (0:00:00.367) 0:00:17.910 ********** 2026-03-13 00:58:53.744388 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:58:53.744394 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:58:53.744399 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:58:53.744405 | orchestrator | 2026-03-13 00:58:53.744412 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-13 00:58:53.744418 | orchestrator | Friday 13 March 2026 00:57:10 +0000 (0:00:00.400) 0:00:18.310 ********** 2026-03-13 00:58:53.744425 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-13 00:58:53.744431 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-13 00:58:53.744437 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-13 00:58:53.744443 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-13 00:58:53.744449 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-13 00:58:53.744477 | orchestrator 
| ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-13 00:58:53.744491 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-13 00:58:53.744497 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-13 00:58:53.744502 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-13 00:58:53.744508 | orchestrator | 2026-03-13 00:58:53.744515 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-13 00:58:53.744521 | orchestrator | Friday 13 March 2026 00:57:11 +0000 (0:00:00.749) 0:00:19.059 ********** 2026-03-13 00:58:53.744527 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-13 00:58:53.744533 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-13 00:58:53.744539 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-13 00:58:53.744546 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:58:53.744552 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-13 00:58:53.744558 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-13 00:58:53.744564 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-13 00:58:53.744569 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:58:53.744575 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-13 00:58:53.744581 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-13 00:58:53.744586 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-13 00:58:53.744592 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:58:53.744599 | orchestrator | 2026-03-13 00:58:53.744605 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-13 00:58:53.744610 | orchestrator | Friday 13 March 2026 00:57:11 +0000 (0:00:00.325) 0:00:19.385 ********** 2026-03-13 
00:58:53.744623 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-13 00:58:53.744699 | orchestrator | 2026-03-13 00:58:53.744709 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-13 00:58:53.744717 | orchestrator | Friday 13 March 2026 00:57:12 +0000 (0:00:00.575) 0:00:19.960 ********** 2026-03-13 00:58:53.744752 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:58:53.744762 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:58:53.744768 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:58:53.744775 | orchestrator | 2026-03-13 00:58:53.744781 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-13 00:58:53.744788 | orchestrator | Friday 13 March 2026 00:57:12 +0000 (0:00:00.266) 0:00:20.226 ********** 2026-03-13 00:58:53.744795 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:58:53.744802 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:58:53.744808 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:58:53.744814 | orchestrator | 2026-03-13 00:58:53.744821 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-13 00:58:53.744851 | orchestrator | Friday 13 March 2026 00:57:12 +0000 (0:00:00.263) 0:00:20.490 ********** 2026-03-13 00:58:53.744858 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:58:53.744864 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:58:53.744871 | orchestrator | skipping: [testbed-node-5] 2026-03-13 00:58:53.744877 | orchestrator | 2026-03-13 00:58:53.744884 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-13 00:58:53.744890 | orchestrator | Friday 13 March 2026 00:57:13 +0000 (0:00:00.273) 0:00:20.763 ********** 2026-03-13 
00:58:53.744897 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:58:53.744904 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:58:53.744911 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:58:53.744917 | orchestrator | 2026-03-13 00:58:53.744924 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-13 00:58:53.744931 | orchestrator | Friday 13 March 2026 00:57:13 +0000 (0:00:00.515) 0:00:21.279 ********** 2026-03-13 00:58:53.744946 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-13 00:58:53.744953 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-13 00:58:53.744959 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-13 00:58:53.744966 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:58:53.744973 | orchestrator | 2026-03-13 00:58:53.744979 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-13 00:58:53.744986 | orchestrator | Friday 13 March 2026 00:57:14 +0000 (0:00:00.320) 0:00:21.599 ********** 2026-03-13 00:58:53.744992 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-13 00:58:53.744998 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-13 00:58:53.745005 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-13 00:58:53.745011 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:58:53.745018 | orchestrator | 2026-03-13 00:58:53.745025 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-13 00:58:53.745031 | orchestrator | Friday 13 March 2026 00:57:14 +0000 (0:00:00.326) 0:00:21.925 ********** 2026-03-13 00:58:53.745037 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-13 00:58:53.745044 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-13 00:58:53.745051 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-13 00:58:53.745057 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:58:53.745063 | orchestrator | 2026-03-13 00:58:53.745070 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-13 00:58:53.745077 | orchestrator | Friday 13 March 2026 00:57:14 +0000 (0:00:00.319) 0:00:22.245 ********** 2026-03-13 00:58:53.745083 | orchestrator | ok: [testbed-node-3] 2026-03-13 00:58:53.745089 | orchestrator | ok: [testbed-node-4] 2026-03-13 00:58:53.745096 | orchestrator | ok: [testbed-node-5] 2026-03-13 00:58:53.745103 | orchestrator | 2026-03-13 00:58:53.745109 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-13 00:58:53.745116 | orchestrator | Friday 13 March 2026 00:57:15 +0000 (0:00:00.302) 0:00:22.547 ********** 2026-03-13 00:58:53.745122 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-13 00:58:53.745129 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-13 00:58:53.745136 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-13 00:58:53.745149 | orchestrator | 2026-03-13 00:58:53.745156 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-13 00:58:53.745162 | orchestrator | Friday 13 March 2026 00:57:15 +0000 (0:00:00.479) 0:00:23.027 ********** 2026-03-13 00:58:53.745169 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-13 00:58:53.745176 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-13 00:58:53.745182 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-13 00:58:53.745189 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-13 00:58:53.745195 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-03-13 00:58:53.745201 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-13 00:58:53.745208 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-13 00:58:53.745214 | orchestrator | 2026-03-13 00:58:53.745220 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-13 00:58:53.745226 | orchestrator | Friday 13 March 2026 00:57:16 +0000 (0:00:00.814) 0:00:23.842 ********** 2026-03-13 00:58:53.745233 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-13 00:58:53.745240 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-13 00:58:53.745254 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-13 00:58:53.745267 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-13 00:58:53.745274 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-13 00:58:53.745281 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-13 00:58:53.745292 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-13 00:58:53.745300 | orchestrator | 2026-03-13 00:58:53.745307 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-03-13 00:58:53.745314 | orchestrator | Friday 13 March 2026 00:57:17 +0000 (0:00:01.590) 0:00:25.432 ********** 2026-03-13 00:58:53.745321 | orchestrator | skipping: [testbed-node-3] 2026-03-13 00:58:53.745327 | orchestrator | skipping: [testbed-node-4] 2026-03-13 00:58:53.745334 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-03-13 00:58:53.745340 | orchestrator | 2026-03-13 00:58:53.745346 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-03-13 00:58:53.745353 | orchestrator | Friday 13 March 2026 00:57:18 +0000 (0:00:00.319) 0:00:25.752 ********** 2026-03-13 00:58:53.745362 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-13 00:58:53.745370 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-13 00:58:53.745378 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-13 00:58:53.745385 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-13 00:58:53.745392 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-13 00:58:53.745398 | orchestrator | 2026-03-13 00:58:53.745404 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-03-13 00:58:53.745411 | orchestrator | Friday 13 March 2026 00:58:02 +0000 (0:00:44.745) 0:01:10.498 ********** 2026-03-13 00:58:53.745417 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-13 00:58:53.745424 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-13 00:58:53.745430 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-13 00:58:53.745437 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-13 00:58:53.745443 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-13 00:58:53.745449 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-13 00:58:53.745554 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-03-13 00:58:53.745562 | orchestrator | 2026-03-13 00:58:53.745569 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-03-13 00:58:53.745575 | orchestrator | Friday 13 March 2026 00:58:24 +0000 (0:00:21.362) 0:01:31.860 ********** 2026-03-13 00:58:53.745592 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-13 00:58:53.745599 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-13 00:58:53.745605 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-13 00:58:53.745612 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-13 00:58:53.745618 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-13 00:58:53.745625 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-13 00:58:53.745632 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-13 00:58:53.745638 | orchestrator | 2026-03-13 00:58:53.745645 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-03-13 00:58:53.745652 | orchestrator | Friday 13 March 2026 00:58:35 +0000 (0:00:11.055) 0:01:42.915 ********** 2026-03-13 00:58:53.745658 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-13 00:58:53.745669 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-13 00:58:53.745676 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-13 00:58:53.745682 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-13 00:58:53.745689 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-13 00:58:53.745701 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-13 00:58:53.745708 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-13 00:58:53.745714 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-13 00:58:53.745721 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-13 00:58:53.745727 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-13 00:58:53.745734 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-13 00:58:53.745740 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-13 00:58:53.745746 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-13 00:58:53.745753 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-03-13 00:58:53.745759 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-13 00:58:53.745765 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-13 00:58:53.745772 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-13 00:58:53.745778 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-13 00:58:53.745785 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-03-13 00:58:53.745791 | orchestrator | 2026-03-13 00:58:53.745797 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 00:58:53.745804 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-13 00:58:53.745812 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-13 00:58:53.745819 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-13 00:58:53.745825 | orchestrator | 2026-03-13 00:58:53.745831 | orchestrator | 2026-03-13 00:58:53.745838 | orchestrator | 2026-03-13 00:58:53.745844 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 00:58:53.745851 | orchestrator | Friday 13 March 2026 00:58:51 +0000 (0:00:15.734) 0:01:58.650 ********** 2026-03-13 00:58:53.745863 | orchestrator | =============================================================================== 2026-03-13 00:58:53.745869 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.75s 2026-03-13 00:58:53.745876 | orchestrator | generate keys ---------------------------------------------------------- 21.36s 2026-03-13 00:58:53.745883 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 15.74s 
2026-03-13 00:58:53.745889 | orchestrator | get keys from monitors ------------------------------------------------- 11.06s 2026-03-13 00:58:53.745895 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 1.82s 2026-03-13 00:58:53.745902 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.62s 2026-03-13 00:58:53.745910 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.59s 2026-03-13 00:58:53.745916 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.81s 2026-03-13 00:58:53.745923 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.75s 2026-03-13 00:58:53.745929 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.71s 2026-03-13 00:58:53.745936 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.70s 2026-03-13 00:58:53.745942 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.70s 2026-03-13 00:58:53.745949 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.68s 2026-03-13 00:58:53.745956 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.68s 2026-03-13 00:58:53.745962 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.62s 2026-03-13 00:58:53.745969 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.58s 2026-03-13 00:58:53.745976 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.58s 2026-03-13 00:58:53.745982 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.53s 2026-03-13 00:58:53.745989 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.53s 2026-03-13 
00:58:53.745995 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.52s 2026-03-13 00:58:53.746002 | orchestrator | 2026-03-13 00:58:53 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:58:56.782836 | orchestrator | 2026-03-13 00:58:56 | INFO  | Task a94401c5-2226-41b1-ad04-413573929ddb is in state STARTED 2026-03-13 00:58:56.784884 | orchestrator | 2026-03-13 00:58:56 | INFO  | Task 9e3c0dc2-27f9-4133-a7e6-aef4b260d5d5 is in state STARTED 2026-03-13 00:58:56.784955 | orchestrator | 2026-03-13 00:58:56 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:58:59.825233 | orchestrator | 2026-03-13 00:58:59 | INFO  | Task a94401c5-2226-41b1-ad04-413573929ddb is in state STARTED 2026-03-13 00:58:59.827148 | orchestrator | 2026-03-13 00:58:59 | INFO  | Task 9e3c0dc2-27f9-4133-a7e6-aef4b260d5d5 is in state STARTED 2026-03-13 00:58:59.827201 | orchestrator | 2026-03-13 00:58:59 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:59:02.864901 | orchestrator | 2026-03-13 00:59:02 | INFO  | Task a94401c5-2226-41b1-ad04-413573929ddb is in state STARTED 2026-03-13 00:59:02.866300 | orchestrator | 2026-03-13 00:59:02 | INFO  | Task 9e3c0dc2-27f9-4133-a7e6-aef4b260d5d5 is in state STARTED 2026-03-13 00:59:02.866435 | orchestrator | 2026-03-13 00:59:02 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:59:05.912295 | orchestrator | 2026-03-13 00:59:05 | INFO  | Task a94401c5-2226-41b1-ad04-413573929ddb is in state STARTED 2026-03-13 00:59:05.913909 | orchestrator | 2026-03-13 00:59:05 | INFO  | Task 9e3c0dc2-27f9-4133-a7e6-aef4b260d5d5 is in state STARTED 2026-03-13 00:59:05.914563 | orchestrator | 2026-03-13 00:59:05 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:59:08.955346 | orchestrator | 2026-03-13 00:59:08 | INFO  | Task a94401c5-2226-41b1-ad04-413573929ddb is in state STARTED 2026-03-13 00:59:08.955655 | orchestrator | 2026-03-13 00:59:08 | INFO  | Task 
9e3c0dc2-27f9-4133-a7e6-aef4b260d5d5 is in state STARTED 2026-03-13 00:59:08.955709 | orchestrator | 2026-03-13 00:59:08 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:59:12.003551 | orchestrator | 2026-03-13 00:59:12 | INFO  | Task a94401c5-2226-41b1-ad04-413573929ddb is in state STARTED 2026-03-13 00:59:12.005522 | orchestrator | 2026-03-13 00:59:12 | INFO  | Task 9e3c0dc2-27f9-4133-a7e6-aef4b260d5d5 is in state STARTED 2026-03-13 00:59:12.005589 | orchestrator | 2026-03-13 00:59:12 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:59:15.051249 | orchestrator | 2026-03-13 00:59:15 | INFO  | Task a94401c5-2226-41b1-ad04-413573929ddb is in state STARTED 2026-03-13 00:59:15.054388 | orchestrator | 2026-03-13 00:59:15 | INFO  | Task 9e3c0dc2-27f9-4133-a7e6-aef4b260d5d5 is in state STARTED 2026-03-13 00:59:15.055135 | orchestrator | 2026-03-13 00:59:15 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:59:18.102081 | orchestrator | 2026-03-13 00:59:18 | INFO  | Task a94401c5-2226-41b1-ad04-413573929ddb is in state STARTED 2026-03-13 00:59:18.102991 | orchestrator | 2026-03-13 00:59:18 | INFO  | Task 9e3c0dc2-27f9-4133-a7e6-aef4b260d5d5 is in state STARTED 2026-03-13 00:59:18.103135 | orchestrator | 2026-03-13 00:59:18 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:59:21.144841 | orchestrator | 2026-03-13 00:59:21 | INFO  | Task a94401c5-2226-41b1-ad04-413573929ddb is in state STARTED 2026-03-13 00:59:21.144905 | orchestrator | 2026-03-13 00:59:21 | INFO  | Task 9e3c0dc2-27f9-4133-a7e6-aef4b260d5d5 is in state STARTED 2026-03-13 00:59:21.144915 | orchestrator | 2026-03-13 00:59:21 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:59:24.194245 | orchestrator | 2026-03-13 00:59:24 | INFO  | Task a94401c5-2226-41b1-ad04-413573929ddb is in state STARTED 2026-03-13 00:59:24.196735 | orchestrator | 2026-03-13 00:59:24 | INFO  | Task 9e3c0dc2-27f9-4133-a7e6-aef4b260d5d5 is in state STARTED 2026-03-13 
00:59:24.196780 | orchestrator | 2026-03-13 00:59:24 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:59:27.255515 | orchestrator | 2026-03-13 00:59:27 | INFO  | Task a94401c5-2226-41b1-ad04-413573929ddb is in state STARTED 2026-03-13 00:59:27.257160 | orchestrator | 2026-03-13 00:59:27 | INFO  | Task 9e3c0dc2-27f9-4133-a7e6-aef4b260d5d5 is in state STARTED 2026-03-13 00:59:27.257234 | orchestrator | 2026-03-13 00:59:27 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:59:30.310184 | orchestrator | 2026-03-13 00:59:30 | INFO  | Task a94401c5-2226-41b1-ad04-413573929ddb is in state SUCCESS 2026-03-13 00:59:30.310372 | orchestrator | 2026-03-13 00:59:30 | INFO  | Task 9e3c0dc2-27f9-4133-a7e6-aef4b260d5d5 is in state STARTED 2026-03-13 00:59:30.315336 | orchestrator | 2026-03-13 00:59:30 | INFO  | Task 5f72507f-a032-423b-866e-5cb7c0bce64d is in state STARTED 2026-03-13 00:59:30.315439 | orchestrator | 2026-03-13 00:59:30 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:59:33.363030 | orchestrator | 2026-03-13 00:59:33 | INFO  | Task 9e3c0dc2-27f9-4133-a7e6-aef4b260d5d5 is in state STARTED 2026-03-13 00:59:33.364516 | orchestrator | 2026-03-13 00:59:33 | INFO  | Task 5f72507f-a032-423b-866e-5cb7c0bce64d is in state STARTED 2026-03-13 00:59:33.364565 | orchestrator | 2026-03-13 00:59:33 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:59:36.415209 | orchestrator | 2026-03-13 00:59:36 | INFO  | Task 9e3c0dc2-27f9-4133-a7e6-aef4b260d5d5 is in state STARTED 2026-03-13 00:59:36.416464 | orchestrator | 2026-03-13 00:59:36 | INFO  | Task 5f72507f-a032-423b-866e-5cb7c0bce64d is in state STARTED 2026-03-13 00:59:36.416772 | orchestrator | 2026-03-13 00:59:36 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:59:39.462578 | orchestrator | 2026-03-13 00:59:39 | INFO  | Task 9e3c0dc2-27f9-4133-a7e6-aef4b260d5d5 is in state STARTED 2026-03-13 00:59:39.464355 | orchestrator | 2026-03-13 00:59:39 | INFO  | Task 
5f72507f-a032-423b-866e-5cb7c0bce64d is in state STARTED 2026-03-13 00:59:39.464444 | orchestrator | 2026-03-13 00:59:39 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:59:42.511299 | orchestrator | 2026-03-13 00:59:42 | INFO  | Task 9e3c0dc2-27f9-4133-a7e6-aef4b260d5d5 is in state STARTED 2026-03-13 00:59:42.512853 | orchestrator | 2026-03-13 00:59:42 | INFO  | Task 5f72507f-a032-423b-866e-5cb7c0bce64d is in state STARTED 2026-03-13 00:59:42.513213 | orchestrator | 2026-03-13 00:59:42 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:59:45.560507 | orchestrator | 2026-03-13 00:59:45 | INFO  | Task 9e3c0dc2-27f9-4133-a7e6-aef4b260d5d5 is in state STARTED 2026-03-13 00:59:45.560565 | orchestrator | 2026-03-13 00:59:45 | INFO  | Task 5f72507f-a032-423b-866e-5cb7c0bce64d is in state STARTED 2026-03-13 00:59:45.562552 | orchestrator | 2026-03-13 00:59:45 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:59:48.593345 | orchestrator | 2026-03-13 00:59:48 | INFO  | Task f86e2e9e-0398-4f02-b2b4-c444328aca9a is in state STARTED 2026-03-13 00:59:48.596974 | orchestrator | 2026-03-13 00:59:48 | INFO  | Task d2a1643a-cae4-4022-8eec-ac1ee46ee703 is in state STARTED 2026-03-13 00:59:48.597057 | orchestrator | 2026-03-13 00:59:48 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 00:59:48.599800 | orchestrator | 2026-03-13 00:59:48 | INFO  | Task 9e3c0dc2-27f9-4133-a7e6-aef4b260d5d5 is in state SUCCESS 2026-03-13 00:59:48.601942 | orchestrator | 2026-03-13 00:59:48.602000 | orchestrator | 2026-03-13 00:59:48.602006 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-03-13 00:59:48.602043 | orchestrator | 2026-03-13 00:59:48.602050 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-03-13 00:59:48.602057 | orchestrator | Friday 13 March 2026 00:58:55 +0000 (0:00:00.139) 0:00:00.139 ********** 2026-03-13 
00:59:48.602064 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-13 00:59:48.602071 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-13 00:59:48.602077 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-13 00:59:48.602084 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-13 00:59:48.602089 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-13 00:59:48.602095 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-13 00:59:48.602100 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-13 00:59:48.602106 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-13 00:59:48.602112 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-13 00:59:48.602117 | orchestrator | 2026-03-13 00:59:48.602123 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-03-13 00:59:48.602130 | orchestrator | Friday 13 March 2026 00:58:59 +0000 (0:00:04.800) 0:00:04.940 ********** 2026-03-13 00:59:48.602161 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-13 00:59:48.602170 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-13 00:59:48.602175 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-13 00:59:48.602181 | orchestrator | ok: [testbed-manager -> 
testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-13 00:59:48.602187 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-13 00:59:48.602195 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-13 00:59:48.602217 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-13 00:59:48.602223 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-13 00:59:48.602229 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-13 00:59:48.602235 | orchestrator | 2026-03-13 00:59:48.602240 | orchestrator | TASK [Create share directory] ************************************************** 2026-03-13 00:59:48.602246 | orchestrator | Friday 13 March 2026 00:59:03 +0000 (0:00:04.015) 0:00:08.955 ********** 2026-03-13 00:59:48.602253 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-13 00:59:48.602258 | orchestrator | 2026-03-13 00:59:48.602267 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-03-13 00:59:48.602274 | orchestrator | Friday 13 March 2026 00:59:04 +0000 (0:00:00.941) 0:00:09.896 ********** 2026-03-13 00:59:48.602280 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-03-13 00:59:48.602287 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-13 00:59:48.602293 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-13 00:59:48.602298 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-03-13 00:59:48.602304 | orchestrator | ok: [testbed-manager -> localhost] => 
(item=ceph.client.cinder.keyring) 2026-03-13 00:59:48.602310 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-03-13 00:59:48.602316 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-03-13 00:59:48.602321 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-03-13 00:59:48.602327 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-03-13 00:59:48.602334 | orchestrator | 2026-03-13 00:59:48.602339 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-03-13 00:59:48.602345 | orchestrator | Friday 13 March 2026 00:59:18 +0000 (0:00:13.476) 0:00:23.372 ********** 2026-03-13 00:59:48.602351 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-03-13 00:59:48.602357 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-03-13 00:59:48.602363 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-13 00:59:48.602393 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-13 00:59:48.602410 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-13 00:59:48.602414 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-13 00:59:48.602418 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-03-13 00:59:48.602422 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-03-13 00:59:48.602432 | orchestrator | ok: [testbed-manager] => 
(item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-03-13 00:59:48.602436 | orchestrator | 2026-03-13 00:59:48.602439 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-03-13 00:59:48.602443 | orchestrator | Friday 13 March 2026 00:59:21 +0000 (0:00:03.072) 0:00:26.445 ********** 2026-03-13 00:59:48.602448 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-03-13 00:59:48.602452 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-13 00:59:48.602455 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-13 00:59:48.602459 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-03-13 00:59:48.602463 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-13 00:59:48.602467 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-03-13 00:59:48.602470 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-03-13 00:59:48.602474 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-03-13 00:59:48.602478 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-03-13 00:59:48.602482 | orchestrator | 2026-03-13 00:59:48.602558 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 00:59:48.602564 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-13 00:59:48.602570 | orchestrator | 2026-03-13 00:59:48.602575 | orchestrator | 2026-03-13 00:59:48.602579 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 00:59:48.602583 | orchestrator | Friday 13 March 2026 00:59:28 +0000 (0:00:06.732) 0:00:33.177 ********** 2026-03-13 00:59:48.602588 | 
orchestrator | =============================================================================== 2026-03-13 00:59:48.602592 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.48s 2026-03-13 00:59:48.602596 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.73s 2026-03-13 00:59:48.602611 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.80s 2026-03-13 00:59:48.602616 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.02s 2026-03-13 00:59:48.602620 | orchestrator | Check if target directories exist --------------------------------------- 3.07s 2026-03-13 00:59:48.602624 | orchestrator | Create share directory -------------------------------------------------- 0.94s 2026-03-13 00:59:48.602628 | orchestrator | 2026-03-13 00:59:48.602631 | orchestrator | 2026-03-13 00:59:48.602635 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-13 00:59:48.602639 | orchestrator | 2026-03-13 00:59:48.602643 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-13 00:59:48.602646 | orchestrator | Friday 13 March 2026 00:57:19 +0000 (0:00:00.227) 0:00:00.227 ********** 2026-03-13 00:59:48.602650 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:59:48.602654 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:59:48.602658 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:59:48.602662 | orchestrator | 2026-03-13 00:59:48.602666 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-13 00:59:48.602669 | orchestrator | Friday 13 March 2026 00:57:19 +0000 (0:00:00.294) 0:00:00.522 ********** 2026-03-13 00:59:48.602673 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-13 00:59:48.602677 | orchestrator | ok: [testbed-node-1] => 
(item=enable_keystone_True) 2026-03-13 00:59:48.602681 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-13 00:59:48.602685 | orchestrator | 2026-03-13 00:59:48.602688 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-03-13 00:59:48.602692 | orchestrator | 2026-03-13 00:59:48.602696 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-13 00:59:48.602704 | orchestrator | Friday 13 March 2026 00:57:20 +0000 (0:00:00.462) 0:00:00.985 ********** 2026-03-13 00:59:48.602708 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:59:48.602712 | orchestrator | 2026-03-13 00:59:48.602716 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-03-13 00:59:48.602719 | orchestrator | Friday 13 March 2026 00:57:20 +0000 (0:00:00.555) 0:00:01.540 ********** 2026-03-13 00:59:48.603055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-13 00:59:48.603069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-13 00:59:48.603081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-13 00:59:48.603086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-13 00:59:48.603102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-13 00:59:48.603130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-13 00:59:48.603139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-13 00:59:48.603146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-13 00:59:48.603152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-13 00:59:48.603159 | orchestrator | 2026-03-13 00:59:48.603169 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-03-13 00:59:48.603175 | orchestrator | Friday 13 March 2026 00:57:22 +0000 (0:00:02.005) 0:00:03.545 ********** 2026-03-13 00:59:48.603181 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:59:48.603187 | orchestrator | 2026-03-13 00:59:48.603192 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-03-13 00:59:48.603199 | orchestrator | Friday 13 March 2026 00:57:23 +0000 (0:00:00.135) 0:00:03.681 ********** 2026-03-13 00:59:48.603204 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:59:48.603210 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:59:48.603222 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:59:48.603228 | orchestrator | 2026-03-13 00:59:48.603234 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-03-13 00:59:48.603238 | orchestrator | Friday 13 March 2026 00:57:23 +0000 (0:00:00.435) 0:00:04.116 ********** 2026-03-13 00:59:48.603242 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-13 00:59:48.603245 | orchestrator | 2026-03-13 00:59:48.603249 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-13 00:59:48.603253 | orchestrator | Friday 13 March 2026 00:57:24 +0000 (0:00:00.898) 0:00:05.015 ********** 2026-03-13 00:59:48.603258 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:59:48.603264 | orchestrator | 2026-03-13 00:59:48.603270 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-03-13 00:59:48.603276 | orchestrator | Friday 13 March 2026 00:57:24 +0000 (0:00:00.504) 0:00:05.519 ********** 
2026-03-13 00:59:48.603288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-13 00:59:48.603296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-13 00:59:48.603306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-13 00:59:48.603319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-13 00:59:48.603324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-13 00:59:48.603328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-13 00:59:48.603338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-13 00:59:48.603342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-13 00:59:48.603346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-13 00:59:48.603350 | orchestrator | 2026-03-13 00:59:48.603485 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-03-13 00:59:48.603497 | orchestrator | Friday 13 March 2026 00:57:28 +0000 (0:00:03.233) 0:00:08.752 ********** 2026-03-13 00:59:48.603515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': 
{'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-13 00:59:48.603523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-13 00:59:48.603529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-13 00:59:48.603536 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:59:48.603577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-13 00:59:48.603586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-13 00:59:48.603597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-13 00:59:48.603601 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:59:48.603605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-13 00:59:48.603609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-13 00:59:48.603619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-13 00:59:48.603623 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:59:48.603627 | orchestrator | 2026-03-13 00:59:48.603631 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-03-13 00:59:48.603635 | orchestrator | Friday 13 March 2026 00:57:28 +0000 (0:00:00.585) 0:00:09.338 ********** 2026-03-13 00:59:48.603640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-13 00:59:48.603652 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-13 00:59:48.603657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-13 00:59:48.603661 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:59:48.603666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': 
False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-13 00:59:48.603675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-13 00:59:48.603680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-13 00:59:48.603684 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:59:48.603697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-13 00:59:48.603702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-13 00:59:48.603706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-13 00:59:48.603711 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:59:48.603715 | orchestrator | 2026-03-13 00:59:48.603720 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-03-13 00:59:48.603724 | orchestrator | Friday 13 March 2026 00:57:29 +0000 (0:00:00.746) 0:00:10.085 ********** 2026-03-13 00:59:48.603733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-13 00:59:48.603738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-13 00:59:48.603749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-13 00:59:48.603754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-13 00:59:48.603759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-13 00:59:48.603766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-13 00:59:48.603770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-13 00:59:48.603777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-13 00:59:48.603783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-13 00:59:48.603787 | orchestrator | 2026-03-13 00:59:48.603791 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-13 00:59:48.603795 | orchestrator | Friday 13 March 2026 00:57:33 +0000 (0:00:03.877) 0:00:13.962 ********** 2026-03-13 00:59:48.603799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-13 00:59:48.603803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-13 00:59:48.603811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-13 00:59:48.603819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-13 00:59:48.603827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-13 00:59:48.603831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-13 00:59:48.603836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-13 00:59:48.603841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-13 00:59:48.603846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-13 00:59:48.603852 | orchestrator | 2026-03-13 00:59:48.603856 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-03-13 00:59:48.603860 | orchestrator | Friday 13 March 2026 00:57:38 +0000 (0:00:05.320) 0:00:19.282 ********** 2026-03-13 00:59:48.603864 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:59:48.603867 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:59:48.603871 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:59:48.603875 | orchestrator | 2026-03-13 00:59:48.603878 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-03-13 00:59:48.603882 | orchestrator | Friday 13 March 2026 00:57:40 +0000 (0:00:01.576) 0:00:20.859 ********** 2026-03-13 00:59:48.603886 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:59:48.603889 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:59:48.603893 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:59:48.603897 | orchestrator | 2026-03-13 00:59:48.603900 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-03-13 00:59:48.603904 | orchestrator | Friday 13 
March 2026 00:57:40 +0000 (0:00:00.499) 0:00:21.358 ********** 2026-03-13 00:59:48.603908 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:59:48.603911 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:59:48.603915 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:59:48.603919 | orchestrator | 2026-03-13 00:59:48.603923 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-03-13 00:59:48.603926 | orchestrator | Friday 13 March 2026 00:57:41 +0000 (0:00:00.287) 0:00:21.646 ********** 2026-03-13 00:59:48.603930 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:59:48.603934 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:59:48.603937 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:59:48.604024 | orchestrator | 2026-03-13 00:59:48.604035 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-03-13 00:59:48.604041 | orchestrator | Friday 13 March 2026 00:57:41 +0000 (0:00:00.380) 0:00:22.027 ********** 2026-03-13 00:59:48.604053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-13 00:59:48.604060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-13 00:59:48.604077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-13 00:59:48.604084 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:59:48.604090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-13 00:59:48.604096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-13 00:59:48.604107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-13 00:59:48.604114 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:59:48.604121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-13 00:59:48.604132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-13 00:59:48.604144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-13 00:59:48.604150 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:59:48.604156 | orchestrator | 2026-03-13 00:59:48.604162 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-13 00:59:48.604168 | orchestrator | Friday 13 March 2026 00:57:41 +0000 (0:00:00.509) 0:00:22.537 ********** 2026-03-13 00:59:48.604174 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:59:48.604181 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:59:48.604187 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:59:48.604193 | orchestrator | 2026-03-13 00:59:48.604199 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-03-13 00:59:48.604205 | orchestrator | Friday 13 March 2026 00:57:42 +0000 (0:00:00.255) 0:00:22.792 ********** 2026-03-13 00:59:48.604211 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-13 00:59:48.604218 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-13 00:59:48.604222 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-13 00:59:48.604226 | orchestrator | 2026-03-13 00:59:48.604230 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-03-13 00:59:48.604234 | orchestrator | Friday 13 March 2026 00:57:43 +0000 (0:00:01.545) 0:00:24.337 ********** 2026-03-13 00:59:48.604237 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-13 00:59:48.604241 | orchestrator | 2026-03-13 00:59:48.604245 | orchestrator | TASK [keystone : Copying over 
keystone-paste.ini] ****************************** 2026-03-13 00:59:48.604248 | orchestrator | Friday 13 March 2026 00:57:44 +0000 (0:00:00.914) 0:00:25.252 ********** 2026-03-13 00:59:48.604252 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:59:48.604256 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:59:48.604260 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:59:48.604263 | orchestrator | 2026-03-13 00:59:48.604267 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-03-13 00:59:48.604271 | orchestrator | Friday 13 March 2026 00:57:45 +0000 (0:00:00.679) 0:00:25.931 ********** 2026-03-13 00:59:48.604277 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-13 00:59:48.604281 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-13 00:59:48.604285 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-13 00:59:48.604291 | orchestrator | 2026-03-13 00:59:48.604297 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-03-13 00:59:48.604303 | orchestrator | Friday 13 March 2026 00:57:46 +0000 (0:00:00.945) 0:00:26.876 ********** 2026-03-13 00:59:48.604314 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:59:48.604321 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:59:48.604326 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:59:48.604333 | orchestrator | 2026-03-13 00:59:48.604339 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-03-13 00:59:48.604346 | orchestrator | Friday 13 March 2026 00:57:46 +0000 (0:00:00.253) 0:00:27.129 ********** 2026-03-13 00:59:48.604352 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-13 00:59:48.604359 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-13 00:59:48.604547 | orchestrator | changed: [testbed-node-2] => 
(item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-13 00:59:48.604557 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-13 00:59:48.604562 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-13 00:59:48.604566 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-13 00:59:48.604570 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-13 00:59:48.604574 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-13 00:59:48.604578 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-13 00:59:48.604582 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-13 00:59:48.604585 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-13 00:59:48.604590 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-13 00:59:48.604596 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-13 00:59:48.604603 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-13 00:59:48.604618 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-13 00:59:48.604625 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-13 00:59:48.604632 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-13 
00:59:48.604638 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-13 00:59:48.604644 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-13 00:59:48.604649 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-13 00:59:48.604655 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-13 00:59:48.604661 | orchestrator | 2026-03-13 00:59:48.604668 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-03-13 00:59:48.604674 | orchestrator | Friday 13 March 2026 00:57:54 +0000 (0:00:08.296) 0:00:35.426 ********** 2026-03-13 00:59:48.604681 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-13 00:59:48.604688 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-13 00:59:48.604695 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-13 00:59:48.604701 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-13 00:59:48.604708 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-13 00:59:48.604715 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-13 00:59:48.604727 | orchestrator | 2026-03-13 00:59:48.604732 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-03-13 00:59:48.604736 | orchestrator | Friday 13 March 2026 00:57:57 +0000 (0:00:02.589) 0:00:38.015 ********** 2026-03-13 00:59:48.604748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-13 00:59:48.604753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-13 00:59:48.604764 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-13 00:59:48.604769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-13 00:59:48.604774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-13 00:59:48.604788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-13 00:59:48.604798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-13 00:59:48.604808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-13 00:59:48.604814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-13 00:59:48.604820 | orchestrator | 2026-03-13 00:59:48.604830 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-13 00:59:48.604837 | orchestrator | Friday 13 March 2026 00:57:59 +0000 (0:00:02.088) 0:00:40.104 ********** 2026-03-13 00:59:48.604844 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:59:48.604850 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:59:48.604856 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:59:48.604862 | orchestrator | 2026-03-13 00:59:48.604868 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-03-13 00:59:48.604873 | orchestrator | Friday 13 March 2026 00:57:59 +0000 (0:00:00.246) 0:00:40.350 ********** 2026-03-13 00:59:48.604879 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:59:48.604885 | orchestrator | 2026-03-13 00:59:48.604891 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-03-13 00:59:48.604897 | orchestrator | Friday 13 March 2026 00:58:01 +0000 (0:00:02.008) 0:00:42.358 
********** 2026-03-13 00:59:48.604909 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:59:48.604915 | orchestrator | 2026-03-13 00:59:48.604921 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-03-13 00:59:48.604927 | orchestrator | Friday 13 March 2026 00:58:03 +0000 (0:00:02.021) 0:00:44.380 ********** 2026-03-13 00:59:48.604933 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:59:48.604940 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:59:48.604945 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:59:48.604949 | orchestrator | 2026-03-13 00:59:48.604953 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-03-13 00:59:48.604956 | orchestrator | Friday 13 March 2026 00:58:04 +0000 (0:00:00.835) 0:00:45.215 ********** 2026-03-13 00:59:48.604960 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:59:48.604964 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:59:48.604967 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:59:48.604971 | orchestrator | 2026-03-13 00:59:48.604975 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-03-13 00:59:48.604979 | orchestrator | Friday 13 March 2026 00:58:04 +0000 (0:00:00.273) 0:00:45.489 ********** 2026-03-13 00:59:48.604982 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:59:48.604986 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:59:48.604990 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:59:48.604994 | orchestrator | 2026-03-13 00:59:48.604997 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-03-13 00:59:48.605001 | orchestrator | Friday 13 March 2026 00:58:05 +0000 (0:00:00.518) 0:00:46.007 ********** 2026-03-13 00:59:48.605005 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:59:48.605008 | orchestrator | 2026-03-13 00:59:48.605012 | orchestrator | TASK 
[keystone : Running Keystone fernet bootstrap container] ****************** 2026-03-13 00:59:48.605016 | orchestrator | Friday 13 March 2026 00:58:18 +0000 (0:00:12.882) 0:00:58.890 ********** 2026-03-13 00:59:48.605020 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:59:48.605024 | orchestrator | 2026-03-13 00:59:48.605028 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-13 00:59:48.605031 | orchestrator | Friday 13 March 2026 00:58:28 +0000 (0:00:10.120) 0:01:09.010 ********** 2026-03-13 00:59:48.605035 | orchestrator | 2026-03-13 00:59:48.605050 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-13 00:59:48.605054 | orchestrator | Friday 13 March 2026 00:58:28 +0000 (0:00:00.059) 0:01:09.069 ********** 2026-03-13 00:59:48.605058 | orchestrator | 2026-03-13 00:59:48.605062 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-13 00:59:48.605066 | orchestrator | Friday 13 March 2026 00:58:28 +0000 (0:00:00.061) 0:01:09.131 ********** 2026-03-13 00:59:48.605070 | orchestrator | 2026-03-13 00:59:48.605073 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-03-13 00:59:48.605077 | orchestrator | Friday 13 March 2026 00:58:28 +0000 (0:00:00.064) 0:01:09.196 ********** 2026-03-13 00:59:48.605081 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:59:48.605084 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:59:48.605088 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:59:48.605092 | orchestrator | 2026-03-13 00:59:48.605096 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-03-13 00:59:48.605100 | orchestrator | Friday 13 March 2026 00:58:37 +0000 (0:00:08.828) 0:01:18.024 ********** 2026-03-13 00:59:48.605104 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:59:48.605107 
| orchestrator | changed: [testbed-node-1] 2026-03-13 00:59:48.605111 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:59:48.605115 | orchestrator | 2026-03-13 00:59:48.605119 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-03-13 00:59:48.605123 | orchestrator | Friday 13 March 2026 00:58:41 +0000 (0:00:04.293) 0:01:22.317 ********** 2026-03-13 00:59:48.605126 | orchestrator | changed: [testbed-node-1] 2026-03-13 00:59:48.605130 | orchestrator | changed: [testbed-node-2] 2026-03-13 00:59:48.605138 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:59:48.605142 | orchestrator | 2026-03-13 00:59:48.605146 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-13 00:59:48.605150 | orchestrator | Friday 13 March 2026 00:58:48 +0000 (0:00:07.144) 0:01:29.462 ********** 2026-03-13 00:59:48.605154 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 00:59:48.605158 | orchestrator | 2026-03-13 00:59:48.605162 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-03-13 00:59:48.605165 | orchestrator | Friday 13 March 2026 00:58:49 +0000 (0:00:00.625) 0:01:30.088 ********** 2026-03-13 00:59:48.605169 | orchestrator | ok: [testbed-node-2] 2026-03-13 00:59:48.605174 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:59:48.605179 | orchestrator | ok: [testbed-node-1] 2026-03-13 00:59:48.605183 | orchestrator | 2026-03-13 00:59:48.605188 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-03-13 00:59:48.605192 | orchestrator | Friday 13 March 2026 00:58:50 +0000 (0:00:00.695) 0:01:30.783 ********** 2026-03-13 00:59:48.605196 | orchestrator | changed: [testbed-node-0] 2026-03-13 00:59:48.605201 | orchestrator | 2026-03-13 00:59:48.605205 | orchestrator | TASK [keystone : 
Creating admin project, user, role, service, and endpoint] **** 2026-03-13 00:59:48.605209 | orchestrator | Friday 13 March 2026 00:58:51 +0000 (0:00:01.653) 0:01:32.437 ********** 2026-03-13 00:59:48.605218 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-03-13 00:59:48.605222 | orchestrator | 2026-03-13 00:59:48.605227 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-03-13 00:59:48.605231 | orchestrator | Friday 13 March 2026 00:59:05 +0000 (0:00:13.710) 0:01:46.147 ********** 2026-03-13 00:59:48.605235 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-03-13 00:59:48.605240 | orchestrator | 2026-03-13 00:59:48.605245 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-03-13 00:59:48.605249 | orchestrator | Friday 13 March 2026 00:59:33 +0000 (0:00:28.235) 0:02:14.382 ********** 2026-03-13 00:59:48.605253 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-03-13 00:59:48.605258 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-03-13 00:59:48.605262 | orchestrator | 2026-03-13 00:59:48.605267 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-03-13 00:59:48.605273 | orchestrator | Friday 13 March 2026 00:59:41 +0000 (0:00:07.505) 0:02:21.887 ********** 2026-03-13 00:59:48.605279 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:59:48.605286 | orchestrator | 2026-03-13 00:59:48.605293 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-03-13 00:59:48.605302 | orchestrator | Friday 13 March 2026 00:59:41 +0000 (0:00:00.139) 0:02:22.027 ********** 2026-03-13 00:59:48.605309 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:59:48.605315 | orchestrator | 2026-03-13 00:59:48.605321 | 
orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-03-13 00:59:48.605328 | orchestrator | Friday 13 March 2026 00:59:41 +0000 (0:00:00.134) 0:02:22.161 ********** 2026-03-13 00:59:48.605334 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:59:48.605341 | orchestrator | 2026-03-13 00:59:48.605347 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-03-13 00:59:48.605354 | orchestrator | Friday 13 March 2026 00:59:41 +0000 (0:00:00.150) 0:02:22.312 ********** 2026-03-13 00:59:48.605360 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:59:48.605413 | orchestrator | 2026-03-13 00:59:48.605428 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-03-13 00:59:48.605434 | orchestrator | Friday 13 March 2026 00:59:42 +0000 (0:00:00.483) 0:02:22.796 ********** 2026-03-13 00:59:48.605440 | orchestrator | ok: [testbed-node-0] 2026-03-13 00:59:48.605447 | orchestrator | 2026-03-13 00:59:48.605505 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-13 00:59:48.605520 | orchestrator | Friday 13 March 2026 00:59:45 +0000 (0:00:02.974) 0:02:25.770 ********** 2026-03-13 00:59:48.605527 | orchestrator | skipping: [testbed-node-0] 2026-03-13 00:59:48.605534 | orchestrator | skipping: [testbed-node-1] 2026-03-13 00:59:48.605540 | orchestrator | skipping: [testbed-node-2] 2026-03-13 00:59:48.605546 | orchestrator | 2026-03-13 00:59:48.605552 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 00:59:48.605564 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-13 00:59:48.605572 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-13 00:59:48.605579 | orchestrator | testbed-node-2 : ok=22  
changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-13 00:59:48.605585 | orchestrator | 2026-03-13 00:59:48.605592 | orchestrator | 2026-03-13 00:59:48.605598 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 00:59:48.605604 | orchestrator | Friday 13 March 2026 00:59:45 +0000 (0:00:00.425) 0:02:26.195 ********** 2026-03-13 00:59:48.605610 | orchestrator | =============================================================================== 2026-03-13 00:59:48.605617 | orchestrator | service-ks-register : keystone | Creating services --------------------- 28.24s 2026-03-13 00:59:48.605623 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 13.71s 2026-03-13 00:59:48.605629 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 12.88s 2026-03-13 00:59:48.605635 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.12s 2026-03-13 00:59:48.605642 | orchestrator | keystone : Restart keystone-ssh container ------------------------------- 8.83s 2026-03-13 00:59:48.605649 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.30s 2026-03-13 00:59:48.605655 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.51s 2026-03-13 00:59:48.605661 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.14s 2026-03-13 00:59:48.605669 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.32s 2026-03-13 00:59:48.605673 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 4.29s 2026-03-13 00:59:48.605676 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.88s 2026-03-13 00:59:48.605680 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates 
------- 3.23s 2026-03-13 00:59:48.605684 | orchestrator | keystone : Creating default user role ----------------------------------- 2.97s 2026-03-13 00:59:48.605688 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.59s 2026-03-13 00:59:48.605692 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.09s 2026-03-13 00:59:48.605696 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.02s 2026-03-13 00:59:48.605699 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.01s 2026-03-13 00:59:48.605711 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.01s 2026-03-13 00:59:48.605716 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.65s 2026-03-13 00:59:48.605720 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.58s 2026-03-13 00:59:48.605724 | orchestrator | 2026-03-13 00:59:48 | INFO  | Task 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a is in state STARTED 2026-03-13 00:59:48.605728 | orchestrator | 2026-03-13 00:59:48 | INFO  | Task 5f72507f-a032-423b-866e-5cb7c0bce64d is in state STARTED 2026-03-13 00:59:48.605732 | orchestrator | 2026-03-13 00:59:48 | INFO  | Wait 1 second(s) until the next check 2026-03-13 00:59:51.652545 | orchestrator | 2026-03-13 00:59:51 | INFO  | Task f86e2e9e-0398-4f02-b2b4-c444328aca9a is in state STARTED 2026-03-13 00:59:51.653035 | orchestrator | 2026-03-13 00:59:51 | INFO  | Task d2a1643a-cae4-4022-8eec-ac1ee46ee703 is in state STARTED 2026-03-13 00:59:51.654099 | orchestrator | 2026-03-13 00:59:51 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 00:59:51.655047 | orchestrator | 2026-03-13 00:59:51 | INFO  | Task 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a is in state STARTED 2026-03-13 00:59:51.656054 | orchestrator | 2026-03-13 00:59:51 | 
INFO  | Task 5f72507f-a032-423b-866e-5cb7c0bce64d is in state STARTED 2026-03-13 00:59:51.656086 | orchestrator | 2026-03-13 00:59:51 | INFO  | Wait 1 second(s) until the next check [identical polling cycles for tasks f86e2e9e-0398-4f02-b2b4-c444328aca9a, d2a1643a-cae4-4022-8eec-ac1ee46ee703, cfa15fa7-1b6b-4a9b-9423-f46683b59c87, 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a, and 5f72507f-a032-423b-866e-5cb7c0bce64d, all in state STARTED, repeat every 3 seconds from 00:59:54 through 01:00:22] 2026-03-13 01:00:22.167220 | orchestrator | 2026-03-13 01:00:22 | INFO  | Task
5f72507f-a032-423b-866e-5cb7c0bce64d is in state STARTED 2026-03-13 01:00:22.167274 | orchestrator | 2026-03-13 01:00:22 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:00:25.200392 | orchestrator | 2026-03-13 01:00:25 | INFO  | Task f86e2e9e-0398-4f02-b2b4-c444328aca9a is in state STARTED 2026-03-13 01:00:25.200776 | orchestrator | 2026-03-13 01:00:25 | INFO  | Task d2a1643a-cae4-4022-8eec-ac1ee46ee703 is in state STARTED 2026-03-13 01:00:25.201933 | orchestrator | 2026-03-13 01:00:25 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 01:00:25.203606 | orchestrator | 2026-03-13 01:00:25 | INFO  | Task 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a is in state STARTED 2026-03-13 01:00:25.205164 | orchestrator | 2026-03-13 01:00:25 | INFO  | Task 5f72507f-a032-423b-866e-5cb7c0bce64d is in state SUCCESS 2026-03-13 01:00:25.206299 | orchestrator | 2026-03-13 01:00:25 | INFO  | Task 3eb77512-c525-4ce2-94ab-461caeaa5be9 is in state STARTED 2026-03-13 01:00:25.206336 | orchestrator | 2026-03-13 01:00:25 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:00:28.244494 | orchestrator | 2026-03-13 01:00:28 | INFO  | Task f86e2e9e-0398-4f02-b2b4-c444328aca9a is in state STARTED 2026-03-13 01:00:28.247416 | orchestrator | 2026-03-13 01:00:28 | INFO  | Task d2a1643a-cae4-4022-8eec-ac1ee46ee703 is in state STARTED 2026-03-13 01:00:28.248980 | orchestrator | 2026-03-13 01:00:28 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 01:00:28.250446 | orchestrator | 2026-03-13 01:00:28 | INFO  | Task 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a is in state STARTED 2026-03-13 01:00:28.251972 | orchestrator | 2026-03-13 01:00:28 | INFO  | Task 3eb77512-c525-4ce2-94ab-461caeaa5be9 is in state STARTED 2026-03-13 01:00:28.252133 | orchestrator | 2026-03-13 01:00:28 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:00:31.291209 | orchestrator | 2026-03-13 01:00:31 | INFO  | Task 
f86e2e9e-0398-4f02-b2b4-c444328aca9a is in state STARTED [identical polling cycles for tasks f86e2e9e-0398-4f02-b2b4-c444328aca9a, d2a1643a-cae4-4022-8eec-ac1ee46ee703, cfa15fa7-1b6b-4a9b-9423-f46683b59c87, 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a, and 3eb77512-c525-4ce2-94ab-461caeaa5be9, all in state STARTED, repeat every 3 seconds from 01:00:31 through 01:01:50] 2026-03-13 01:01:53.209383 | orchestrator | 2026-03-13 01:01:53 | INFO  | Task f86e2e9e-0398-4f02-b2b4-c444328aca9a is in state STARTED 2026-03-13 01:01:53.210410 | orchestrator | 2026-03-13 01:01:53.210451 | orchestrator | 2026-03-13 01:01:53.210459 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-03-13 01:01:53.210465 | orchestrator | 2026-03-13 01:01:53.210470 | orchestrator | TASK [osism.services.cephclient : Include container
tasks] *********************
2026-03-13 01:01:53.210476 | orchestrator | Friday 13 March 2026 00:59:32 +0000 (0:00:00.202) 0:00:00.202 **********
2026-03-13 01:01:53.210482 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-03-13 01:01:53.210488 | orchestrator |
2026-03-13 01:01:53.210493 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-03-13 01:01:53.210498 | orchestrator | Friday 13 March 2026 00:59:32 +0000 (0:00:00.192) 0:00:00.395 **********
2026-03-13 01:01:53.210503 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-03-13 01:01:53.210509 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-03-13 01:01:53.210514 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-03-13 01:01:53.210520 | orchestrator |
2026-03-13 01:01:53.210525 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-03-13 01:01:53.210530 | orchestrator | Friday 13 March 2026 00:59:33 +0000 (0:00:01.221) 0:00:01.617 **********
2026-03-13 01:01:53.210535 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-03-13 01:01:53.210541 | orchestrator |
2026-03-13 01:01:53.210546 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-03-13 01:01:53.210552 | orchestrator | Friday 13 March 2026 00:59:35 +0000 (0:00:01.386) 0:00:03.003 **********
2026-03-13 01:01:53.210581 | orchestrator | changed: [testbed-manager]
2026-03-13 01:01:53.210593 | orchestrator |
2026-03-13 01:01:53.210601 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-03-13 01:01:53.210609 | orchestrator | Friday 13 March 2026 00:59:36 +0000 (0:00:00.856) 0:00:03.859 **********
2026-03-13 01:01:53.210616 | orchestrator | changed: [testbed-manager]
2026-03-13 01:01:53.210624 | orchestrator |
2026-03-13 01:01:53.210632 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-03-13 01:01:53.210640 | orchestrator | Friday 13 March 2026 00:59:36 +0000 (0:00:00.837) 0:00:04.697 **********
2026-03-13 01:01:53.210648 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-03-13 01:01:53.210656 | orchestrator | ok: [testbed-manager]
2026-03-13 01:01:53.210665 | orchestrator |
2026-03-13 01:01:53.210674 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-03-13 01:01:53.210683 | orchestrator | Friday 13 March 2026 01:00:12 +0000 (0:00:35.529) 0:00:40.227 **********
2026-03-13 01:01:53.210693 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-03-13 01:01:53.210698 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-03-13 01:01:53.210711 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-03-13 01:01:53.210716 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-03-13 01:01:53.210721 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-03-13 01:01:53.210726 | orchestrator |
2026-03-13 01:01:53.210732 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-03-13 01:01:53.210737 | orchestrator | Friday 13 March 2026 01:00:16 +0000 (0:00:04.013) 0:00:44.241 **********
2026-03-13 01:01:53.210742 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-03-13 01:01:53.210747 | orchestrator |
2026-03-13 01:01:53.210752 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-03-13 01:01:53.210757 | orchestrator | Friday 13 March 2026 01:00:16 +0000 (0:00:00.506) 0:00:44.747 **********
2026-03-13 01:01:53.210762 | orchestrator | skipping: [testbed-manager]
2026-03-13 01:01:53.210767 | orchestrator |
2026-03-13 01:01:53.210772 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-03-13 01:01:53.210777 | orchestrator | Friday 13 March 2026 01:00:17 +0000 (0:00:00.129) 0:00:44.877 **********
2026-03-13 01:01:53.210782 | orchestrator | skipping: [testbed-manager]
2026-03-13 01:01:53.210787 | orchestrator |
2026-03-13 01:01:53.210792 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-03-13 01:01:53.210797 | orchestrator | Friday 13 March 2026 01:00:17 +0000 (0:00:00.511) 0:00:45.389 **********
2026-03-13 01:01:53.210802 | orchestrator | changed: [testbed-manager]
2026-03-13 01:01:53.210807 | orchestrator |
2026-03-13 01:01:53.210812 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-03-13 01:01:53.210817 | orchestrator | Friday 13 March 2026 01:00:19 +0000 (0:00:01.587) 0:00:46.977 **********
2026-03-13 01:01:53.210823 | orchestrator | changed: [testbed-manager]
2026-03-13 01:01:53.210827 | orchestrator |
2026-03-13 01:01:53.210833 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-03-13 01:01:53.210838 | orchestrator | Friday 13 March 2026 01:00:19 +0000 (0:00:00.813) 0:00:47.790 **********
2026-03-13 01:01:53.210843 | orchestrator | changed: [testbed-manager]
2026-03-13 01:01:53.210848 | orchestrator |
2026-03-13 01:01:53.210853 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-03-13 01:01:53.210858 | orchestrator | Friday 13 March 2026 01:00:20 +0000 (0:00:00.610) 0:00:48.401 **********
2026-03-13 01:01:53.210864 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-03-13 01:01:53.210869 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-03-13 01:01:53.210874 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-03-13 01:01:53.210879 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-03-13 01:01:53.210884 | orchestrator |
2026-03-13 01:01:53.210889 | orchestrator | PLAY RECAP *********************************************************************
2026-03-13 01:01:53.210899 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-13 01:01:53.210905 | orchestrator |
2026-03-13 01:01:53.210910 | orchestrator |
2026-03-13 01:01:53.211024 | orchestrator | TASKS RECAP ********************************************************************
2026-03-13 01:01:53.211032 | orchestrator | Friday 13 March 2026 01:00:22 +0000 (0:00:01.538) 0:00:49.940 **********
2026-03-13 01:01:53.211038 | orchestrator | ===============================================================================
2026-03-13 01:01:53.211044 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 35.53s
2026-03-13 01:01:53.211050 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.01s
2026-03-13 01:01:53.211057 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.59s
2026-03-13 01:01:53.211062 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.54s
2026-03-13 01:01:53.211068 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.39s
2026-03-13 01:01:53.211074 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.22s
2026-03-13 01:01:53.211080 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.86s
2026-03-13 01:01:53.211086 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.84s
2026-03-13 01:01:53.211092 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.81s
2026-03-13 01:01:53.211099 | orchestrator |
osism.services.cephclient : Wait for an healthy service ----------------- 0.61s
2026-03-13 01:01:53.211105 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.51s
2026-03-13 01:01:53.211110 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.51s
2026-03-13 01:01:53.211116 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.19s
2026-03-13 01:01:53.211125 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s
2026-03-13 01:01:53.211134 | orchestrator |
2026-03-13 01:01:53.211148 | orchestrator | 2026-03-13 01:01:53 | INFO  | Task d2a1643a-cae4-4022-8eec-ac1ee46ee703 is in state SUCCESS
2026-03-13 01:01:53.212569 | orchestrator |
2026-03-13 01:01:53.212616 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-13 01:01:53.212626 | orchestrator |
2026-03-13 01:01:53.212634 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-13 01:01:53.212642 | orchestrator | Friday 13 March 2026 00:59:52 +0000 (0:00:00.243) 0:00:00.243 **********
2026-03-13 01:01:53.212650 | orchestrator | ok: [testbed-node-0]
2026-03-13 01:01:53.212658 | orchestrator | ok: [testbed-node-1]
2026-03-13 01:01:53.212666 | orchestrator | ok: [testbed-node-2]
2026-03-13 01:01:53.212674 | orchestrator |
2026-03-13 01:01:53.212681 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-13 01:01:53.212689 | orchestrator | Friday 13 March 2026 00:59:52 +0000 (0:00:00.452) 0:00:00.696 **********
2026-03-13 01:01:53.212697 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-03-13 01:01:53.212705 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-03-13 01:01:53.212727 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-03-13 01:01:53.212735 | orchestrator |
2026-03-13 01:01:53.212743 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-03-13 01:01:53.212751 | orchestrator |
2026-03-13 01:01:53.212765 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-03-13 01:01:53.212779 | orchestrator | Friday 13 March 2026 00:59:53 +0000 (0:00:00.440) 0:00:01.136 **********
2026-03-13 01:01:53.212793 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 01:01:53.212807 | orchestrator |
2026-03-13 01:01:53.212821 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2026-03-13 01:01:53.212851 | orchestrator | Friday 13 March 2026 00:59:53 +0000 (0:00:00.450) 0:00:01.587 **********
2026-03-13 01:01:53.212860 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2026-03-13 01:01:53.212868 | orchestrator |
2026-03-13 01:01:53.212876 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2026-03-13 01:01:53.212883 | orchestrator | Friday 13 March 2026 00:59:58 +0000 (0:00:04.753) 0:00:06.340 **********
2026-03-13 01:01:53.212891 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2026-03-13 01:01:53.212899 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2026-03-13 01:01:53.212907 | orchestrator |
2026-03-13 01:01:53.212915 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2026-03-13 01:01:53.212922 | orchestrator | Friday 13 March 2026 01:00:04 +0000 (0:00:06.705) 0:00:13.046 **********
2026-03-13 01:01:53.212930 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-13 01:01:53.212938 | orchestrator |
2026-03-13 01:01:53.212945 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2026-03-13 01:01:53.212953 | orchestrator | Friday 13 March 2026 01:00:08 +0000 (0:00:03.316) 0:00:16.362 **********
2026-03-13 01:01:53.212961 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-03-13 01:01:53.212969 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-13 01:01:53.212976 | orchestrator |
2026-03-13 01:01:53.212984 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2026-03-13 01:01:53.212992 | orchestrator | Friday 13 March 2026 01:00:12 +0000 (0:00:03.734) 0:00:20.096 **********
2026-03-13 01:01:53.212999 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-13 01:01:53.213007 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2026-03-13 01:01:53.213015 | orchestrator | changed: [testbed-node-0] => (item=creator)
2026-03-13 01:01:53.213023 | orchestrator | changed: [testbed-node-0] => (item=observer)
2026-03-13 01:01:53.213030 | orchestrator | changed: [testbed-node-0] => (item=audit)
2026-03-13 01:01:53.213038 | orchestrator |
2026-03-13 01:01:53.213046 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2026-03-13 01:01:53.213053 | orchestrator | Friday 13 March 2026 01:00:27 +0000 (0:00:15.691) 0:00:35.788 **********
2026-03-13 01:01:53.213061 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2026-03-13 01:01:53.213069 | orchestrator |
2026-03-13 01:01:53.213076 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2026-03-13 01:01:53.213084 | orchestrator | Friday 13 March 2026 01:00:31 +0000 (0:00:03.536) 0:00:39.324 **********
2026-03-13 01:01:53.213095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api',
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-13 01:01:53.213121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-13 01:01:53.213136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-13 01:01:53.213146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-13 01:01:53.213156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-13 01:01:53.213189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-13 01:01:53.213208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-13 01:01:53.213226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-13 01:01:53.213235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-13 01:01:53.213244 | orchestrator |
2026-03-13 01:01:53.213252 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2026-03-13 01:01:53.213260 | orchestrator | Friday 13 March 2026 01:00:33 +0000 (0:00:02.226) 0:00:41.551 **********
2026-03-13 01:01:53.213267 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2026-03-13 01:01:53.213275 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2026-03-13 01:01:53.213283 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2026-03-13 01:01:53.213290 | orchestrator |
2026-03-13 01:01:53.213298 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2026-03-13 01:01:53.213306 | orchestrator | Friday 13 March 2026 01:00:35 +0000 (0:00:00.236) 0:00:43.237 **********
2026-03-13 01:01:53.213314 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:01:53.213328 | orchestrator |
2026-03-13 01:01:53.213340 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2026-03-13 01:01:53.213354 | orchestrator | Friday 13 March 2026 01:00:35 +0000 (0:00:00.236) 0:00:43.474 **********
2026-03-13 01:01:53.213367 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:01:53.213381 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:01:53.213395 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:01:53.213406 | orchestrator |
2026-03-13 01:01:53.213414 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-03-13 01:01:53.213421 | orchestrator | Friday 13 March 2026 01:00:35 +0000 (0:00:00.456) 0:00:43.931 **********
2026-03-13 01:01:53.213429 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 01:01:53.213436 | orchestrator |
2026-03-13 01:01:53.213444 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2026-03-13 01:01:53.213452 | orchestrator | Friday 13 March 2026 01:00:36 +0000 (0:00:00.485) 0:00:44.417 **********
2026-03-13 01:01:53.213460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-13 01:01:53.213480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-13 01:01:53.213493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-13 01:01:53.213501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-13 01:01:53.213513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-13 01:01:53.213528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-13 01:01:53.213549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-13 01:01:53.213568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-13 01:01:53.213581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-13 01:01:53.213589 | orchestrator |
2026-03-13 01:01:53.213597 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] ***
2026-03-13 01:01:53.213605 | orchestrator | Friday 13 March 2026 01:00:39 +0000 (0:00:03.168) 0:00:47.585 **********
2026-03-13 01:01:53.213613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-13 01:01:53.213622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-13 01:01:53.213631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-13 01:01:53.213643 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:01:53.213656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-13 01:01:53.213668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-13 01:01:53.213677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-13 01:01:53.213685 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:01:53.213693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-13 01:01:53.213702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-13 01:01:53.213715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-13 01:01:53.213724 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:01:53.213731 | orchestrator |
2026-03-13 01:01:53.213739 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] ****
2026-03-13 01:01:53.213747 | orchestrator | Friday 13 March 2026 01:00:41 +0000 (0:00:01.737) 0:00:49.323 **********
2026-03-13 01:01:53.213760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-13 01:01:53.213772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-13 01:01:53.213780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-13 01:01:53.213788 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:01:53.213796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-13 01:01:53.213813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-13 01:01:53.213821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout':
'30'}}})  2026-03-13 01:01:53.213829 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:01:53.213846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-13 01:01:53.213855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-13 01:01:53.213863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-13 01:01:53.213871 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:01:53.213879 | orchestrator | 2026-03-13 01:01:53.213887 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-03-13 01:01:53.213988 | orchestrator | Friday 13 March 2026 01:00:42 +0000 (0:00:01.018) 0:00:50.341 ********** 2026-03-13 01:01:53.214000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-13 01:01:53.214070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-13 01:01:53.214087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-13 01:01:53.214096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-13 01:01:53.214104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-13 01:01:53.214118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-13 01:01:53.214126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-13 01:01:53.214141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-13 01:01:53.214149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-13 01:01:53.214158 | orchestrator | 2026-03-13 01:01:53.214190 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-03-13 01:01:53.214199 | orchestrator | Friday 13 March 2026 01:00:45 +0000 (0:00:03.157) 0:00:53.499 ********** 2026-03-13 01:01:53.214207 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:01:53.214216 | orchestrator | changed: [testbed-node-2] 2026-03-13 01:01:53.214223 | orchestrator | changed: 
[testbed-node-1] 2026-03-13 01:01:53.214231 | orchestrator | 2026-03-13 01:01:53.214239 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-03-13 01:01:53.214247 | orchestrator | Friday 13 March 2026 01:00:47 +0000 (0:00:02.396) 0:00:55.895 ********** 2026-03-13 01:01:53.214255 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-13 01:01:53.214262 | orchestrator | 2026-03-13 01:01:53.214270 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-03-13 01:01:53.214278 | orchestrator | Friday 13 March 2026 01:00:48 +0000 (0:00:01.058) 0:00:56.954 ********** 2026-03-13 01:01:53.214286 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:01:53.214294 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:01:53.214301 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:01:53.214309 | orchestrator | 2026-03-13 01:01:53.214317 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-03-13 01:01:53.214330 | orchestrator | Friday 13 March 2026 01:00:49 +0000 (0:00:01.103) 0:00:58.057 ********** 2026-03-13 01:01:53.214338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-13 01:01:53.214347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-13 01:01:53.214360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-13 01:01:53.214372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-13 01:01:53.214381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-13 01:01:53.214393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-13 01:01:53.214402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-13 01:01:53.214410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-13 01:01:53.214418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-13 01:01:53.214426 | 
orchestrator | 2026-03-13 01:01:53.214434 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-03-13 01:01:53.214441 | orchestrator | Friday 13 March 2026 01:00:57 +0000 (0:00:07.801) 0:01:05.859 ********** 2026-03-13 01:01:53.214458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-13 01:01:53.214467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-13 01:01:53.214480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-13 01:01:53.214488 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:01:53.214496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-13 01:01:53.214505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-13 01:01:53.214517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-13 01:01:53.214526 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:01:53.214537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-13 01:01:53.214550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-13 01:01:53.214559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-13 01:01:53.214567 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:01:53.214575 | orchestrator | 2026-03-13 01:01:53.214583 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-03-13 01:01:53.214591 | orchestrator | Friday 13 March 2026 01:00:58 +0000 (0:00:00.731) 0:01:06.590 ********** 2026-03-13 01:01:53.214599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-13 01:01:53.214613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-13 01:01:53.214707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-13 01:01:53.214726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-13 01:01:53.214736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-13 01:01:53.214746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-13 01:01:53.214756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-13 01:01:53.214772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-13 01:01:53.214794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-13 01:01:53.214803 | orchestrator | 2026-03-13 01:01:53.214812 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-13 01:01:53.214822 | orchestrator | Friday 13 March 2026 01:01:02 +0000 (0:00:03.915) 0:01:10.506 ********** 2026-03-13 01:01:53.214831 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:01:53.214841 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:01:53.214851 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:01:53.214860 | orchestrator | 2026-03-13 01:01:53.214869 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-03-13 01:01:53.214877 | orchestrator | Friday 13 March 2026 01:01:03 +0000 (0:00:00.642) 0:01:11.149 ********** 2026-03-13 01:01:53.214885 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:01:53.214892 | orchestrator | 2026-03-13 01:01:53.214900 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-03-13 01:01:53.214908 | orchestrator | Friday 13 March 2026 01:01:05 +0000 (0:00:02.898) 0:01:14.047 ********** 2026-03-13 01:01:53.214916 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:01:53.214923 | orchestrator | 2026-03-13 01:01:53.214931 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-03-13 01:01:53.214939 | orchestrator | Friday 13 March 2026 01:01:08 +0000 (0:00:02.656) 0:01:16.703 ********** 2026-03-13 01:01:53.214947 | orchestrator | changed: [testbed-node-0] 2026-03-13 
01:01:53.214954 | orchestrator | 2026-03-13 01:01:53.214962 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-13 01:01:53.214970 | orchestrator | Friday 13 March 2026 01:01:18 +0000 (0:00:09.971) 0:01:26.675 ********** 2026-03-13 01:01:53.214978 | orchestrator | 2026-03-13 01:01:53.214986 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-13 01:01:53.214993 | orchestrator | Friday 13 March 2026 01:01:18 +0000 (0:00:00.063) 0:01:26.739 ********** 2026-03-13 01:01:53.215001 | orchestrator | 2026-03-13 01:01:53.215009 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-13 01:01:53.215017 | orchestrator | Friday 13 March 2026 01:01:18 +0000 (0:00:00.059) 0:01:26.798 ********** 2026-03-13 01:01:53.215024 | orchestrator | 2026-03-13 01:01:53.215033 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-03-13 01:01:53.215041 | orchestrator | Friday 13 March 2026 01:01:18 +0000 (0:00:00.062) 0:01:26.861 ********** 2026-03-13 01:01:53.215049 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:01:53.215056 | orchestrator | changed: [testbed-node-2] 2026-03-13 01:01:53.215064 | orchestrator | changed: [testbed-node-1] 2026-03-13 01:01:53.215072 | orchestrator | 2026-03-13 01:01:53.215080 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-03-13 01:01:53.215088 | orchestrator | Friday 13 March 2026 01:01:30 +0000 (0:00:11.814) 0:01:38.675 ********** 2026-03-13 01:01:53.215095 | orchestrator | changed: [testbed-node-2] 2026-03-13 01:01:53.215103 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:01:53.215111 | orchestrator | changed: [testbed-node-1] 2026-03-13 01:01:53.215119 | orchestrator | 2026-03-13 01:01:53.215126 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] 
***************** 2026-03-13 01:01:53.215134 | orchestrator | Friday 13 March 2026 01:01:40 +0000 (0:00:09.606) 0:01:48.281 ********** 2026-03-13 01:01:53.215142 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:01:53.215150 | orchestrator | changed: [testbed-node-1] 2026-03-13 01:01:53.215157 | orchestrator | changed: [testbed-node-2] 2026-03-13 01:01:53.215222 | orchestrator | 2026-03-13 01:01:53.215301 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 01:01:53.215314 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-13 01:01:53.215324 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-13 01:01:53.215331 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-13 01:01:53.215339 | orchestrator | 2026-03-13 01:01:53.215347 | orchestrator | 2026-03-13 01:01:53.215355 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 01:01:53.215363 | orchestrator | Friday 13 March 2026 01:01:50 +0000 (0:00:10.778) 0:01:59.059 ********** 2026-03-13 01:01:53.215371 | orchestrator | =============================================================================== 2026-03-13 01:01:53.215378 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.69s 2026-03-13 01:01:53.215393 | orchestrator | barbican : Restart barbican-api container ------------------------------ 11.81s 2026-03-13 01:01:53.215402 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.78s 2026-03-13 01:01:53.215410 | orchestrator | barbican : Running barbican bootstrap container ------------------------- 9.97s 2026-03-13 01:01:53.215418 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 9.61s 
2026-03-13 01:01:53.215426 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 7.80s 2026-03-13 01:01:53.215433 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.71s 2026-03-13 01:01:53.215441 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 4.75s 2026-03-13 01:01:53.215449 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.92s 2026-03-13 01:01:53.215462 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.73s 2026-03-13 01:01:53.215470 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.54s 2026-03-13 01:01:53.215478 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.32s 2026-03-13 01:01:53.215486 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.17s 2026-03-13 01:01:53.215494 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.16s 2026-03-13 01:01:53.215502 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.90s 2026-03-13 01:01:53.215510 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.66s 2026-03-13 01:01:53.215517 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.40s 2026-03-13 01:01:53.215525 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.23s 2026-03-13 01:01:53.215533 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.74s 2026-03-13 01:01:53.215541 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.69s 2026-03-13 01:01:53.215549 | orchestrator | 2026-03-13 01:01:53 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state 
STARTED 2026-03-13 01:01:53.215557 | orchestrator | 2026-03-13 01:01:53 | INFO  | Task 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a is in state STARTED 2026-03-13 01:01:53.215565 | orchestrator | 2026-03-13 01:01:53 | INFO  | Task 3f4a09a0-57f8-4215-b51e-2a6b7f3711cd is in state STARTED 2026-03-13 01:01:53.215922 | orchestrator | 2026-03-13 01:01:53 | INFO  | Task 3eb77512-c525-4ce2-94ab-461caeaa5be9 is in state STARTED 2026-03-13 01:01:53.216016 | orchestrator | 2026-03-13 01:01:53 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:01:56.241533 | orchestrator | 2026-03-13 01:01:56 | INFO  | Task f86e2e9e-0398-4f02-b2b4-c444328aca9a is in state STARTED 2026-03-13 01:01:56.241715 | orchestrator | 2026-03-13 01:01:56 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 01:01:56.242866 | orchestrator | 2026-03-13 01:01:56 | INFO  | Task 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a is in state STARTED 2026-03-13 01:01:56.243393 | orchestrator | 2026-03-13 01:01:56 | INFO  | Task 3f4a09a0-57f8-4215-b51e-2a6b7f3711cd is in state STARTED 2026-03-13 01:01:56.243780 | orchestrator | 2026-03-13 01:01:56 | INFO  | Task 3eb77512-c525-4ce2-94ab-461caeaa5be9 is in state SUCCESS 2026-03-13 01:01:56.244281 | orchestrator | 2026-03-13 01:01:56 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:01:59.274284 | orchestrator | 2026-03-13 01:01:59 | INFO  | Task f86e2e9e-0398-4f02-b2b4-c444328aca9a is in state STARTED 2026-03-13 01:01:59.275258 | orchestrator | 2026-03-13 01:01:59 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 01:01:59.277992 | orchestrator | 2026-03-13 01:01:59 | INFO  | Task 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a is in state STARTED 2026-03-13 01:01:59.278585 | orchestrator | 2026-03-13 01:01:59 | INFO  | Task 3f4a09a0-57f8-4215-b51e-2a6b7f3711cd is in state STARTED 2026-03-13 01:01:59.278625 | orchestrator | 2026-03-13 01:01:59 | INFO  | Wait 1 second(s) until the next check 2026-03-13 
01:02:02.316143 | orchestrator | 2026-03-13 01:02:02 | INFO  | Task f86e2e9e-0398-4f02-b2b4-c444328aca9a is in state STARTED 2026-03-13 01:02:02.316627 | orchestrator | 2026-03-13 01:02:02 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 01:02:02.317663 | orchestrator | 2026-03-13 01:02:02 | INFO  | Task 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a is in state STARTED 2026-03-13 01:02:02.318746 | orchestrator | 2026-03-13 01:02:02 | INFO  | Task 3f4a09a0-57f8-4215-b51e-2a6b7f3711cd is in state STARTED 2026-03-13 01:02:02.318776 | orchestrator | 2026-03-13 01:02:02 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:02:05.362671 | orchestrator | 2026-03-13 01:02:05 | INFO  | Task f86e2e9e-0398-4f02-b2b4-c444328aca9a is in state STARTED 2026-03-13 01:02:05.363208 | orchestrator | 2026-03-13 01:02:05 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 01:02:05.364374 | orchestrator | 2026-03-13 01:02:05 | INFO  | Task 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a is in state STARTED 2026-03-13 01:02:05.365320 | orchestrator | 2026-03-13 01:02:05 | INFO  | Task 3f4a09a0-57f8-4215-b51e-2a6b7f3711cd is in state STARTED 2026-03-13 01:02:05.365350 | orchestrator | 2026-03-13 01:02:05 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:02:08.407536 | orchestrator | 2026-03-13 01:02:08 | INFO  | Task f86e2e9e-0398-4f02-b2b4-c444328aca9a is in state STARTED 2026-03-13 01:02:08.409466 | orchestrator | 2026-03-13 01:02:08 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 01:02:08.411824 | orchestrator | 2026-03-13 01:02:08 | INFO  | Task 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a is in state STARTED 2026-03-13 01:02:08.414275 | orchestrator | 2026-03-13 01:02:08 | INFO  | Task 3f4a09a0-57f8-4215-b51e-2a6b7f3711cd is in state STARTED 2026-03-13 01:02:08.414330 | orchestrator | 2026-03-13 01:02:08 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:02:11.456387 | orchestrator 
| 2026-03-13 01:02:11 | INFO  | Task f86e2e9e-0398-4f02-b2b4-c444328aca9a is in state STARTED 2026-03-13 01:02:11.458268 | orchestrator | 2026-03-13 01:02:11 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 01:02:11.460302 | orchestrator | 2026-03-13 01:02:11 | INFO  | Task 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a is in state STARTED 2026-03-13 01:02:11.461115 | orchestrator | 2026-03-13 01:02:11 | INFO  | Task 3f4a09a0-57f8-4215-b51e-2a6b7f3711cd is in state STARTED 2026-03-13 01:02:11.461284 | orchestrator | 2026-03-13 01:02:11 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:02:14.502274 | orchestrator | 2026-03-13 01:02:14 | INFO  | Task f86e2e9e-0398-4f02-b2b4-c444328aca9a is in state STARTED 2026-03-13 01:02:14.505210 | orchestrator | 2026-03-13 01:02:14 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 01:02:14.506947 | orchestrator | 2026-03-13 01:02:14 | INFO  | Task 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a is in state STARTED 2026-03-13 01:02:14.508187 | orchestrator | 2026-03-13 01:02:14 | INFO  | Task 3f4a09a0-57f8-4215-b51e-2a6b7f3711cd is in state STARTED 2026-03-13 01:02:14.508670 | orchestrator | 2026-03-13 01:02:14 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:02:17.552287 | orchestrator | 2026-03-13 01:02:17 | INFO  | Task f86e2e9e-0398-4f02-b2b4-c444328aca9a is in state STARTED 2026-03-13 01:02:17.554716 | orchestrator | 2026-03-13 01:02:17 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 01:02:17.555876 | orchestrator | 2026-03-13 01:02:17 | INFO  | Task 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a is in state STARTED 2026-03-13 01:02:17.556723 | orchestrator | 2026-03-13 01:02:17 | INFO  | Task 3f4a09a0-57f8-4215-b51e-2a6b7f3711cd is in state STARTED 2026-03-13 01:02:17.556764 | orchestrator | 2026-03-13 01:02:17 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:02:20.602638 | orchestrator | 2026-03-13 01:02:20 | INFO  | 
Task f86e2e9e-0398-4f02-b2b4-c444328aca9a is in state STARTED 2026-03-13 01:02:20.604400 | orchestrator | 2026-03-13 01:02:20 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 01:02:20.606586 | orchestrator | 2026-03-13 01:02:20 | INFO  | Task 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a is in state STARTED 2026-03-13 01:02:20.608342 | orchestrator | 2026-03-13 01:02:20 | INFO  | Task 3f4a09a0-57f8-4215-b51e-2a6b7f3711cd is in state STARTED 2026-03-13 01:02:20.608442 | orchestrator | 2026-03-13 01:02:20 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:02:23.634678 | orchestrator | 2026-03-13 01:02:23 | INFO  | Task f86e2e9e-0398-4f02-b2b4-c444328aca9a is in state STARTED 2026-03-13 01:02:23.635774 | orchestrator | 2026-03-13 01:02:23 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 01:02:23.639409 | orchestrator | 2026-03-13 01:02:23 | INFO  | Task 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a is in state STARTED 2026-03-13 01:02:23.640811 | orchestrator | 2026-03-13 01:02:23 | INFO  | Task 3f4a09a0-57f8-4215-b51e-2a6b7f3711cd is in state STARTED 2026-03-13 01:02:23.640840 | orchestrator | 2026-03-13 01:02:23 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:02:26.682795 | orchestrator | 2026-03-13 01:02:26 | INFO  | Task f86e2e9e-0398-4f02-b2b4-c444328aca9a is in state STARTED 2026-03-13 01:02:26.685060 | orchestrator | 2026-03-13 01:02:26 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 01:02:26.687326 | orchestrator | 2026-03-13 01:02:26 | INFO  | Task 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a is in state STARTED 2026-03-13 01:02:26.688912 | orchestrator | 2026-03-13 01:02:26 | INFO  | Task 3f4a09a0-57f8-4215-b51e-2a6b7f3711cd is in state STARTED 2026-03-13 01:02:26.689366 | orchestrator | 2026-03-13 01:02:26 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:02:29.723687 | orchestrator | 2026-03-13 01:02:29 | INFO  | Task 
f86e2e9e-0398-4f02-b2b4-c444328aca9a is in state STARTED 2026-03-13 01:02:29.725925 | orchestrator | 2026-03-13 01:02:29 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 01:02:29.728009 | orchestrator | 2026-03-13 01:02:29 | INFO  | Task 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a is in state STARTED 2026-03-13 01:02:29.729915 | orchestrator | 2026-03-13 01:02:29 | INFO  | Task 3f4a09a0-57f8-4215-b51e-2a6b7f3711cd is in state STARTED 2026-03-13 01:02:29.729970 | orchestrator | 2026-03-13 01:02:29 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:02:32.765676 | orchestrator | 2026-03-13 01:02:32 | INFO  | Task f86e2e9e-0398-4f02-b2b4-c444328aca9a is in state STARTED 2026-03-13 01:02:32.767377 | orchestrator | 2026-03-13 01:02:32 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 01:02:32.769139 | orchestrator | 2026-03-13 01:02:32 | INFO  | Task 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a is in state STARTED 2026-03-13 01:02:32.770361 | orchestrator | 2026-03-13 01:02:32 | INFO  | Task 3f4a09a0-57f8-4215-b51e-2a6b7f3711cd is in state STARTED 2026-03-13 01:02:32.770748 | orchestrator | 2026-03-13 01:02:32 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:02:35.807401 | orchestrator | 2026-03-13 01:02:35 | INFO  | Task f86e2e9e-0398-4f02-b2b4-c444328aca9a is in state STARTED 2026-03-13 01:02:35.808814 | orchestrator | 2026-03-13 01:02:35 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 01:02:35.810894 | orchestrator | 2026-03-13 01:02:35 | INFO  | Task 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a is in state STARTED 2026-03-13 01:02:35.812151 | orchestrator | 2026-03-13 01:02:35 | INFO  | Task 3f4a09a0-57f8-4215-b51e-2a6b7f3711cd is in state STARTED 2026-03-13 01:02:35.812360 | orchestrator | 2026-03-13 01:02:35 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:02:38.848846 | orchestrator | 2026-03-13 01:02:38 | INFO  | Task 
f86e2e9e-0398-4f02-b2b4-c444328aca9a is in state STARTED 2026-03-13 01:02:38.850661 | orchestrator | 2026-03-13 01:02:38 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 01:02:38.853212 | orchestrator | 2026-03-13 01:02:38 | INFO  | Task 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a is in state STARTED 2026-03-13 01:02:38.855622 | orchestrator | 2026-03-13 01:02:38 | INFO  | Task 3f4a09a0-57f8-4215-b51e-2a6b7f3711cd is in state STARTED 2026-03-13 01:02:38.855687 | orchestrator | 2026-03-13 01:02:38 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:02:41.900594 | orchestrator | 2026-03-13 01:02:41 | INFO  | Task f86e2e9e-0398-4f02-b2b4-c444328aca9a is in state SUCCESS 2026-03-13 01:02:41.900639 | orchestrator | 2026-03-13 01:02:41.900644 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-13 01:02:41.900647 | orchestrator | 2.16.14 2026-03-13 01:02:41.900651 | orchestrator | 2026-03-13 01:02:41.900654 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************ 2026-03-13 01:02:41.900658 | orchestrator | 2026-03-13 01:02:41.900665 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-03-13 01:02:41.900668 | orchestrator | Friday 13 March 2026 01:00:26 +0000 (0:00:00.266) 0:00:00.266 ********** 2026-03-13 01:02:41.900671 | orchestrator | changed: [testbed-manager] 2026-03-13 01:02:41.900689 | orchestrator | 2026-03-13 01:02:41.900693 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-03-13 01:02:41.900697 | orchestrator | Friday 13 March 2026 01:00:28 +0000 (0:00:01.506) 0:00:01.772 ********** 2026-03-13 01:02:41.900700 | orchestrator | changed: [testbed-manager] 2026-03-13 01:02:41.900703 | orchestrator | 2026-03-13 01:02:41.900706 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-03-13 
01:02:41.900710 | orchestrator | Friday 13 March 2026 01:00:29 +0000 (0:00:00.930) 0:00:02.703 ********** 2026-03-13 01:02:41.900713 | orchestrator | changed: [testbed-manager] 2026-03-13 01:02:41.900726 | orchestrator | 2026-03-13 01:02:41.900730 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-03-13 01:02:41.900733 | orchestrator | Friday 13 March 2026 01:00:30 +0000 (0:00:00.991) 0:00:03.694 ********** 2026-03-13 01:02:41.900736 | orchestrator | changed: [testbed-manager] 2026-03-13 01:02:41.900739 | orchestrator | 2026-03-13 01:02:41.900742 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-03-13 01:02:41.900745 | orchestrator | Friday 13 March 2026 01:00:31 +0000 (0:00:01.045) 0:00:04.739 ********** 2026-03-13 01:02:41.900748 | orchestrator | changed: [testbed-manager] 2026-03-13 01:02:41.900751 | orchestrator | 2026-03-13 01:02:41.900754 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-03-13 01:02:41.900757 | orchestrator | Friday 13 March 2026 01:00:32 +0000 (0:00:01.124) 0:00:05.864 ********** 2026-03-13 01:02:41.900760 | orchestrator | changed: [testbed-manager] 2026-03-13 01:02:41.900763 | orchestrator | 2026-03-13 01:02:41.900766 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-03-13 01:02:41.900769 | orchestrator | Friday 13 March 2026 01:00:33 +0000 (0:00:01.188) 0:00:07.053 ********** 2026-03-13 01:02:41.900772 | orchestrator | changed: [testbed-manager] 2026-03-13 01:02:41.900775 | orchestrator | 2026-03-13 01:02:41.900778 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-03-13 01:02:41.900781 | orchestrator | Friday 13 March 2026 01:00:35 +0000 (0:00:02.045) 0:00:09.099 ********** 2026-03-13 01:02:41.900784 | orchestrator | changed: [testbed-manager] 2026-03-13 01:02:41.900787 | 
orchestrator | 2026-03-13 01:02:41.900790 | orchestrator | TASK [Create admin user] ******************************************************* 2026-03-13 01:02:41.900793 | orchestrator | Friday 13 March 2026 01:00:36 +0000 (0:00:01.188) 0:00:10.287 ********** 2026-03-13 01:02:41.900802 | orchestrator | changed: [testbed-manager] 2026-03-13 01:02:41.900805 | orchestrator | 2026-03-13 01:02:41.900808 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-03-13 01:02:41.900811 | orchestrator | Friday 13 March 2026 01:01:29 +0000 (0:00:52.782) 0:01:03.069 ********** 2026-03-13 01:02:41.900814 | orchestrator | skipping: [testbed-manager] 2026-03-13 01:02:41.900817 | orchestrator | 2026-03-13 01:02:41.900820 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-13 01:02:41.900823 | orchestrator | 2026-03-13 01:02:41.900826 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-13 01:02:41.900829 | orchestrator | Friday 13 March 2026 01:01:29 +0000 (0:00:00.125) 0:01:03.195 ********** 2026-03-13 01:02:41.900832 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:02:41.900835 | orchestrator | 2026-03-13 01:02:41.900838 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-13 01:02:41.900841 | orchestrator | 2026-03-13 01:02:41.900844 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-13 01:02:41.900847 | orchestrator | Friday 13 March 2026 01:01:30 +0000 (0:00:01.343) 0:01:04.539 ********** 2026-03-13 01:02:41.900850 | orchestrator | changed: [testbed-node-1] 2026-03-13 01:02:41.900853 | orchestrator | 2026-03-13 01:02:41.900856 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-13 01:02:41.900859 | orchestrator | 2026-03-13 01:02:41.900862 | orchestrator | TASK 
[Restart ceph manager service] ******************************************** 2026-03-13 01:02:41.900865 | orchestrator | Friday 13 March 2026 01:01:42 +0000 (0:00:11.469) 0:01:16.009 ********** 2026-03-13 01:02:41.900868 | orchestrator | changed: [testbed-node-2] 2026-03-13 01:02:41.900871 | orchestrator | 2026-03-13 01:02:41.900874 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 01:02:41.900878 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-13 01:02:41.900882 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-13 01:02:41.900888 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-13 01:02:41.900891 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-13 01:02:41.900894 | orchestrator | 2026-03-13 01:02:41.900897 | orchestrator | 2026-03-13 01:02:41.900900 | orchestrator | 2026-03-13 01:02:41.900903 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 01:02:41.900906 | orchestrator | Friday 13 March 2026 01:01:53 +0000 (0:00:11.203) 0:01:27.213 ********** 2026-03-13 01:02:41.900922 | orchestrator | =============================================================================== 2026-03-13 01:02:41.900930 | orchestrator | Create admin user ------------------------------------------------------ 52.78s 2026-03-13 01:02:41.900935 | orchestrator | Restart ceph manager service ------------------------------------------- 24.02s 2026-03-13 01:02:41.900940 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.05s 2026-03-13 01:02:41.900945 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.51s 2026-03-13 01:02:41.900950 | orchestrator | Set 
mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.19s 2026-03-13 01:02:41.900955 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.19s 2026-03-13 01:02:41.900960 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.12s 2026-03-13 01:02:41.900965 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.05s 2026-03-13 01:02:41.900970 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.99s 2026-03-13 01:02:41.900974 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.93s 2026-03-13 01:02:41.900999 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.13s 2026-03-13 01:02:41.901007 | orchestrator | 2026-03-13 01:02:41.901490 | orchestrator | 2026-03-13 01:02:41.901530 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-13 01:02:41.901539 | orchestrator | 2026-03-13 01:02:41.901546 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-13 01:02:41.901553 | orchestrator | Friday 13 March 2026 00:59:52 +0000 (0:00:00.254) 0:00:00.254 ********** 2026-03-13 01:02:41.901559 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:02:41.901565 | orchestrator | ok: [testbed-node-1] 2026-03-13 01:02:41.901571 | orchestrator | ok: [testbed-node-2] 2026-03-13 01:02:41.901578 | orchestrator | 2026-03-13 01:02:41.901583 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-13 01:02:41.901589 | orchestrator | Friday 13 March 2026 00:59:52 +0000 (0:00:00.400) 0:00:00.654 ********** 2026-03-13 01:02:41.901596 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-03-13 01:02:41.901602 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 
2026-03-13 01:02:41.901608 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-03-13 01:02:41.901614 | orchestrator | 2026-03-13 01:02:41.901620 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-03-13 01:02:41.901626 | orchestrator | 2026-03-13 01:02:41.901633 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-13 01:02:41.901639 | orchestrator | Friday 13 March 2026 00:59:53 +0000 (0:00:00.426) 0:00:01.080 ********** 2026-03-13 01:02:41.901646 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 01:02:41.901654 | orchestrator | 2026-03-13 01:02:41.901663 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-03-13 01:02:41.901669 | orchestrator | Friday 13 March 2026 00:59:53 +0000 (0:00:00.492) 0:00:01.573 ********** 2026-03-13 01:02:41.902136 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-03-13 01:02:41.902156 | orchestrator | 2026-03-13 01:02:41.902163 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-03-13 01:02:41.902181 | orchestrator | Friday 13 March 2026 00:59:57 +0000 (0:00:04.101) 0:00:05.675 ********** 2026-03-13 01:02:41.902188 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-03-13 01:02:41.902195 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-03-13 01:02:41.902201 | orchestrator | 2026-03-13 01:02:41.902208 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-03-13 01:02:41.902214 | orchestrator | Friday 13 March 2026 01:00:04 +0000 (0:00:06.851) 0:00:12.526 ********** 2026-03-13 01:02:41.902220 | orchestrator | changed: 
[testbed-node-0] => (item=service) 2026-03-13 01:02:41.902227 | orchestrator | 2026-03-13 01:02:41.902583 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-03-13 01:02:41.902596 | orchestrator | Friday 13 March 2026 01:00:07 +0000 (0:00:03.423) 0:00:15.950 ********** 2026-03-13 01:02:41.902603 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-03-13 01:02:41.902610 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-13 01:02:41.902616 | orchestrator | 2026-03-13 01:02:41.902623 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-03-13 01:02:41.902629 | orchestrator | Friday 13 March 2026 01:00:11 +0000 (0:00:03.623) 0:00:19.574 ********** 2026-03-13 01:02:41.902636 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-13 01:02:41.902642 | orchestrator | 2026-03-13 01:02:41.902649 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-03-13 01:02:41.902656 | orchestrator | Friday 13 March 2026 01:00:14 +0000 (0:00:03.124) 0:00:22.698 ********** 2026-03-13 01:02:41.902662 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-03-13 01:02:41.902668 | orchestrator | 2026-03-13 01:02:41.902674 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-03-13 01:02:41.902681 | orchestrator | Friday 13 March 2026 01:00:18 +0000 (0:00:03.693) 0:00:26.392 ********** 2026-03-13 01:02:41.902689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-13 01:02:41.902727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-13 01:02:41.902741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-13 01:02:41.902756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-13 01:02:41.902763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-13 01:02:41.902770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-13 01:02:41.902776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.902802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.902810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.902823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.902831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.902838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.902845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.902851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.902873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.902884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.902894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.902900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.902907 | orchestrator | 2026-03-13 01:02:41.902914 | orchestrator | TASK [designate : 
Check if policies shall be overwritten] ********************** 2026-03-13 01:02:41.902920 | orchestrator | Friday 13 March 2026 01:00:21 +0000 (0:00:03.019) 0:00:29.411 ********** 2026-03-13 01:02:41.902926 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:02:41.902933 | orchestrator | 2026-03-13 01:02:41.902939 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-03-13 01:02:41.902946 | orchestrator | Friday 13 March 2026 01:00:21 +0000 (0:00:00.140) 0:00:29.552 ********** 2026-03-13 01:02:41.902952 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:02:41.902957 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:02:41.902964 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:02:41.902970 | orchestrator | 2026-03-13 01:02:41.902977 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-13 01:02:41.902983 | orchestrator | Friday 13 March 2026 01:00:21 +0000 (0:00:00.286) 0:00:29.838 ********** 2026-03-13 01:02:41.902990 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 01:02:41.902997 | orchestrator | 2026-03-13 01:02:41.903003 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-03-13 01:02:41.903009 | orchestrator | Friday 13 March 2026 01:00:22 +0000 (0:00:00.726) 0:00:30.564 ********** 2026-03-13 01:02:41.903016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-13 01:02:41.903046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-13 01:02:41.903056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': 
'9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-13 01:02:41.903064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-13 01:02:41.903071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-13 01:02:41.903077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-13 01:02:41.903138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.903158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.903166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.903176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.903183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.903190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.903197 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.903225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.903233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.903242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.903251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.903258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.903266 | orchestrator | 2026-03-13 01:02:41.903275 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-03-13 01:02:41.903285 | orchestrator | Friday 13 March 2026 
01:00:28 +0000 (0:00:06.011) 0:00:36.575 ********** 2026-03-13 01:02:41.903294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-13 01:02:41.903306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-13 01:02:41.903332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-13 01:02:41.903341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-13 01:02:41.903353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-13 01:02:41.903361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.903369 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:02:41.903377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-13 01:02:41.903390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-13 01:02:41.903415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.903424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.903436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.903445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.903453 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:02:41.903461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-13 01:02:41.903472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-13 01:02:41.903496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.903504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.903514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.903521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.903527 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:02:41.903534 | orchestrator |
2026-03-13 01:02:41.903541 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2026-03-13 01:02:41.903548 | orchestrator | Friday 13 March 2026 01:00:29 +0000 (0:00:00.834) 0:00:37.410 **********
2026-03-13 01:02:41.903555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-13 01:02:41.903566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-13 01:02:41.903589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.903597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.903613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.903620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.903627 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:02:41.903634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-13 01:02:41.903644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-13 01:02:41.903668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-13 01:02:41.903676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-13 01:02:41.903686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.903693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.903706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.903713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.903719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.903745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.903753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.903760 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:02:41.903771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.903778 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:02:41.903785 | orchestrator |
2026-03-13 01:02:41.903791 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2026-03-13 01:02:41.903803 | orchestrator | Friday 13 March 2026 01:00:32 +0000 (0:00:02.620) 0:00:40.031 **********
2026-03-13 01:02:41.903810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-13 01:02:41.903817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-13 01:02:41.903841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-13 01:02:41.903848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-13 01:02:41.903858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-13 01:02:41.903868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-13 01:02:41.903875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.903881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.903904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.903911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.903917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.903925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.903935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.903942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.903949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.903973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.903980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.903990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.904002 | orchestrator |
2026-03-13 01:02:41.904009 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-03-13 01:02:41.904015 | orchestrator | Friday 13 March 2026 01:00:38 +0000 (0:00:06.037) 0:00:46.068 **********
2026-03-13 01:02:41.904022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-13 01:02:41.904029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-13 01:02:41.904036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-13 01:02:41.904047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-13 01:02:41.904054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-13 01:02:41.904068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-13 01:02:41.904076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.904083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.904106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.904119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.904126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.904135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.904146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.904154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.904162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.904168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.904179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.904186 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.904196 | orchestrator | 2026-03-13 01:02:41.904203 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-03-13 01:02:41.904210 | orchestrator | Friday 13 March 2026 01:00:56 +0000 (0:00:18.015) 0:01:04.084 ********** 2026-03-13 01:02:41.904217 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-13 01:02:41.904224 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-13 01:02:41.904229 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-13 01:02:41.904235 | orchestrator | 2026-03-13 01:02:41.904243 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-03-13 01:02:41.904250 | orchestrator | Friday 13 March 2026 01:01:01 +0000 (0:00:05.456) 0:01:09.541 ********** 2026-03-13 01:02:41.904256 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-13 01:02:41.904263 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-13 01:02:41.904269 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-13 01:02:41.904276 | orchestrator | 2026-03-13 
01:02:41.904282 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-03-13 01:02:41.904289 | orchestrator | Friday 13 March 2026 01:01:04 +0000 (0:00:03.370) 0:01:12.912 ********** 2026-03-13 01:02:41.904296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-13 01:02:41.904302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2026-03-13 01:02:41.904314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-13 01:02:41.904325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-13 01:02:41.904337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-13 01:02:41.904344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-13 01:02:41.904351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-13 01:02:41.904358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-13 01:02:41.904365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-13 01:02:41.904379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-13 01:02:41.904386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-13 01:02:41.904396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-13 01:02:41.904403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-13 01:02:41.904410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 
'timeout': '30'}}})  2026-03-13 01:02:41.904417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-13 01:02:41.904424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.904439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.904446 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.904452 | orchestrator | 2026-03-13 01:02:41.904462 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-03-13 01:02:41.904469 | orchestrator | Friday 13 March 2026 01:01:08 +0000 (0:00:03.984) 0:01:16.896 ********** 2026-03-13 01:02:41.904475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-13 01:02:41.904482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-13 01:02:41.904489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-13 01:02:41.904504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-13 01:02:41.904511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-13 01:02:41.904520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-13 01:02:41.904527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-13 01:02:41.904534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-13 01:02:41.904541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-13 01:02:41.904554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-13 01:02:41.904561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-13 01:02:41.904571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-13 01:02:41.904578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 
'timeout': '30'}}})  2026-03-13 01:02:41.904584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-13 01:02:41.904591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-13 01:02:41.904597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.904613 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.904620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.904627 | orchestrator |
2026-03-13 01:02:41.904634 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-13 01:02:41.904641 | orchestrator | Friday 13 March 2026 01:01:11 +0000 (0:00:02.314) 0:01:19.211 **********
2026-03-13 01:02:41.904648 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:02:41.904654 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:02:41.904661 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:02:41.904668 | orchestrator |
2026-03-13 01:02:41.904674 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-03-13 01:02:41.904681 | orchestrator | Friday 13 March 2026 01:01:11 +0000 (0:00:00.368) 0:01:19.580 **********
2026-03-13 01:02:41.904692 | orchestrator |
skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-13 01:02:41.904699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-13 01:02:41.904706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-13 01:02:41.904719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-13 01:02:41.904730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-13 01:02:41.904737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-13 01:02:41.904744 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:02:41.904754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-13 01:02:41.904761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-13 01:02:41.904772 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-13 01:02:41.904779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-13 01:02:41.904789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-13 01:02:41.904796 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-13 01:02:41.904805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-13 01:02:41.904812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-13 01:02:41.904819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 
'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-13 01:02:41.904833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-13 01:02:41.904840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-13 01:02:41.904847 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:02:41.904857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.904864 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:02:41.904870 | orchestrator |
2026-03-13 01:02:41.904877 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-03-13 01:02:41.904884 | orchestrator | Friday 13 March 2026 01:01:12 +0000 (0:00:00.828) 0:01:20.408 **********
2026-03-13 01:02:41.904891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-13 01:02:41.904898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-13 01:02:41.904908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-13 01:02:41.904916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-13 01:02:41.904950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-13 01:02:41.904961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-13 01:02:41.904968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.904975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.904986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.904993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 
5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.905003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.905010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.905019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.905026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.905036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.905043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-13 01:02:41.905050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.905061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-13 01:02:41.905068 | orchestrator |
2026-03-13 01:02:41.905074 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-13 01:02:41.905081 | orchestrator | Friday 13 March 2026 01:01:16 +0000 (0:00:04.186) 0:01:24.595 **********
2026-03-13 01:02:41.905100 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:02:41.905108 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:02:41.905114 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:02:41.905121 | orchestrator |
2026-03-13 01:02:41.905128 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2026-03-13 01:02:41.905134 | orchestrator | Friday 13 March 2026 01:01:16 +0000 (0:00:00.313) 0:01:24.909 **********
2026-03-13 01:02:41.905141 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-03-13 01:02:41.905148 | orchestrator |
2026-03-13 01:02:41.905154 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-03-13 01:02:41.905161 | orchestrator | Friday 13 March 2026 01:01:18 +0000 (0:00:01.914) 0:01:26.823 **********
2026-03-13 01:02:41.905168 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-13 01:02:41.905174 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-03-13 01:02:41.905181 | orchestrator |
2026-03-13 01:02:41.905187 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-03-13 01:02:41.905194 | orchestrator | Friday 13 March 2026 01:01:21 +0000 (0:00:02.174) 0:01:28.998 **********
2026-03-13 01:02:41.905205 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:02:41.905212 | orchestrator |
2026-03-13 01:02:41.905218 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-13 01:02:41.905228 | orchestrator | Friday 13 March 2026 01:01:35 +0000 (0:00:14.587) 0:01:43.586 **********
2026-03-13 01:02:41.905235 | orchestrator |
2026-03-13 01:02:41.905241 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-13 01:02:41.905248 | orchestrator | Friday 13 March 2026 01:01:35 +0000 (0:00:00.065) 0:01:43.651 **********
2026-03-13 01:02:41.905255 | orchestrator |
2026-03-13 01:02:41.905262 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-13 01:02:41.905268 | orchestrator | Friday 13 March 2026 01:01:35 +0000 (0:00:00.051) 0:01:43.702 **********
2026-03-13 01:02:41.905275 | orchestrator |
2026-03-13 01:02:41.905282 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-03-13 01:02:41.905288 | orchestrator | Friday 13 March 2026 01:01:35 +0000 (0:00:00.050) 0:01:43.753 **********
2026-03-13 01:02:41.905294 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:02:41.905301 | orchestrator | changed: [testbed-node-1]
2026-03-13 01:02:41.905307 | orchestrator | changed: [testbed-node-2]
2026-03-13 01:02:41.905313 | orchestrator |
2026-03-13 01:02:41.905319 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-03-13 01:02:41.905325 | orchestrator | Friday 13 March 2026 01:01:43 +0000 (0:00:07.310) 0:01:51.064 **********
2026-03-13 01:02:41.905331 | orchestrator | changed: [testbed-node-1]
2026-03-13 01:02:41.905337 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:02:41.905344 | orchestrator | changed: [testbed-node-2]
2026-03-13 01:02:41.905351 | orchestrator |
2026-03-13 01:02:41.905357 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-03-13 01:02:41.905364 | orchestrator | Friday 13 March 2026 01:01:54 +0000 (0:00:11.095) 0:02:02.160 **********
2026-03-13 01:02:41.905370 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:02:41.905378 | orchestrator | changed: [testbed-node-1]
2026-03-13 01:02:41.905384 | orchestrator | changed: [testbed-node-2]
2026-03-13 01:02:41.905390 | orchestrator |
2026-03-13 01:02:41.905397 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-03-13 01:02:41.905403 | orchestrator | Friday 13 March 2026 01:02:01 +0000 (0:00:07.066) 0:02:09.226 **********
2026-03-13 01:02:41.905410 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:02:41.905416 | orchestrator | changed: [testbed-node-2]
2026-03-13 01:02:41.905422 | orchestrator | changed: [testbed-node-1]
2026-03-13 01:02:41.905429 | orchestrator |
2026-03-13 01:02:41.905435 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2026-03-13 01:02:41.905441 | orchestrator | Friday 13 March 2026 01:02:11 +0000 (0:00:10.353) 0:02:19.580 **********
2026-03-13 01:02:41.905448 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:02:41.905455 | orchestrator |
changed: [testbed-node-1]
2026-03-13 01:02:41.905461 | orchestrator | changed: [testbed-node-2]
2026-03-13 01:02:41.905468 | orchestrator |
2026-03-13 01:02:41.905474 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2026-03-13 01:02:41.905480 | orchestrator | Friday 13 March 2026 01:02:21 +0000 (0:00:09.979) 0:02:29.559 **********
2026-03-13 01:02:41.905487 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:02:41.905493 | orchestrator | changed: [testbed-node-1]
2026-03-13 01:02:41.905500 | orchestrator | changed: [testbed-node-2]
2026-03-13 01:02:41.905506 | orchestrator |
2026-03-13 01:02:41.905513 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2026-03-13 01:02:41.905519 | orchestrator | Friday 13 March 2026 01:02:31 +0000 (0:00:10.347) 0:02:39.907 **********
2026-03-13 01:02:41.905526 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:02:41.905532 | orchestrator |
2026-03-13 01:02:41.905539 | orchestrator | PLAY RECAP *********************************************************************
2026-03-13 01:02:41.905546 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-13 01:02:41.905559 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-13 01:02:41.905565 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-13 01:02:41.905572 | orchestrator |
2026-03-13 01:02:41.905579 | orchestrator |
2026-03-13 01:02:41.905591 | orchestrator | TASKS RECAP ********************************************************************
2026-03-13 01:02:41.905598 | orchestrator | Friday 13 March 2026 01:02:39 +0000 (0:00:07.630) 0:02:47.537 **********
2026-03-13 01:02:41.905715 | orchestrator | ===============================================================================
2026-03-13 01:02:41.905724 | orchestrator | designate : Copying over designate.conf -------------------------------- 18.02s
2026-03-13 01:02:41.905731 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.59s
2026-03-13 01:02:41.905737 | orchestrator | designate : Restart designate-api container ---------------------------- 11.10s
2026-03-13 01:02:41.905743 | orchestrator | designate : Restart designate-producer container ----------------------- 10.35s
2026-03-13 01:02:41.905750 | orchestrator | designate : Restart designate-worker container ------------------------- 10.35s
2026-03-13 01:02:41.905757 | orchestrator | designate : Restart designate-mdns container ---------------------------- 9.98s
2026-03-13 01:02:41.905762 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.63s
2026-03-13 01:02:41.905768 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 7.31s
2026-03-13 01:02:41.905773 | orchestrator | designate : Restart designate-central container ------------------------- 7.07s
2026-03-13 01:02:41.905779 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.85s
2026-03-13 01:02:41.905784 | orchestrator | designate : Copying over config.json files for services ----------------- 6.04s
2026-03-13 01:02:41.905790 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.01s
2026-03-13 01:02:41.905795 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 5.46s
2026-03-13 01:02:41.905806 | orchestrator | designate : Check designate containers ---------------------------------- 4.19s
2026-03-13 01:02:41.905811 | orchestrator | service-ks-register : designate | Creating services --------------------- 4.10s
2026-03-13 01:02:41.905817 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.98s
2026-03-13 01:02:41.905823 | orchestrator
| service-ks-register : designate | Granting user roles ------------------- 3.69s 2026-03-13 01:02:41.905829 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.62s 2026-03-13 01:02:41.905834 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.42s 2026-03-13 01:02:41.905840 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.37s 2026-03-13 01:02:41.905845 | orchestrator | 2026-03-13 01:02:41 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED 2026-03-13 01:02:41.905856 | orchestrator | 2026-03-13 01:02:41 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 01:02:41.906323 | orchestrator | 2026-03-13 01:02:41 | INFO  | Task 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a is in state STARTED 2026-03-13 01:02:41.906944 | orchestrator | 2026-03-13 01:02:41 | INFO  | Task 3f4a09a0-57f8-4215-b51e-2a6b7f3711cd is in state STARTED 2026-03-13 01:02:41.906964 | orchestrator | 2026-03-13 01:02:41 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:02:44.950485 | orchestrator | 2026-03-13 01:02:44 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED 2026-03-13 01:02:44.951130 | orchestrator | 2026-03-13 01:02:44 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 01:02:44.951747 | orchestrator | 2026-03-13 01:02:44 | INFO  | Task 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a is in state STARTED 2026-03-13 01:02:44.952536 | orchestrator | 2026-03-13 01:02:44 | INFO  | Task 3f4a09a0-57f8-4215-b51e-2a6b7f3711cd is in state STARTED 2026-03-13 01:02:44.952603 | orchestrator | 2026-03-13 01:02:44 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:02:47.991548 | orchestrator | 2026-03-13 01:02:47 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED 2026-03-13 01:02:47.991844 | orchestrator | 2026-03-13 01:02:47 | INFO  | Task 
cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED
2026-03-13 01:02:47.992892 | orchestrator | 2026-03-13 01:02:47 | INFO  | Task 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a is in state STARTED
2026-03-13 01:02:47.993479 | orchestrator | 2026-03-13 01:02:47 | INFO  | Task 3f4a09a0-57f8-4215-b51e-2a6b7f3711cd is in state STARTED
2026-03-13 01:02:47.993797 | orchestrator | 2026-03-13 01:02:47 | INFO  | Wait 1 second(s) until the next check
2026-03-13 01:02:51.038984 | orchestrator | 2026-03-13 01:02:51 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED
2026-03-13 01:02:51.040407 | orchestrator | 2026-03-13 01:02:51 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED
2026-03-13 01:02:51.042961 | orchestrator | 2026-03-13 01:02:51 | INFO  | Task 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a is in state STARTED
2026-03-13 01:02:51.046478 | orchestrator | 2026-03-13 01:02:51 | INFO  | Task 3f4a09a0-57f8-4215-b51e-2a6b7f3711cd is in state STARTED
2026-03-13 01:02:51.046564 | orchestrator | 2026-03-13 01:02:51 | INFO  | Wait 1 second(s) until the next check
2026-03-13 01:02:54.086546 | orchestrator | 2026-03-13 01:02:54 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED
2026-03-13 01:02:54.088247 | orchestrator | 2026-03-13 01:02:54 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED
2026-03-13 01:02:54.089881 | orchestrator | 2026-03-13 01:02:54 | INFO  | Task 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a is in state STARTED
2026-03-13 01:02:54.093329 | orchestrator | 2026-03-13 01:02:54 | INFO  | Task 3f4a09a0-57f8-4215-b51e-2a6b7f3711cd is in state STARTED
2026-03-13 01:02:54.093374 | orchestrator | 2026-03-13 01:02:54 | INFO  | Wait 1 second(s) until the next check
2026-03-13 01:02:57.138327 | orchestrator | 2026-03-13 01:02:57 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED
2026-03-13 01:02:57.142114 | orchestrator | 2026-03-13 01:02:57 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED
2026-03-13 01:02:57.144555 | orchestrator | 2026-03-13 01:02:57 | INFO  | Task 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a is in state STARTED
2026-03-13 01:02:57.146643 | orchestrator | 2026-03-13 01:02:57 | INFO  | Task 3f4a09a0-57f8-4215-b51e-2a6b7f3711cd is in state STARTED
2026-03-13 01:02:57.147298 | orchestrator | 2026-03-13 01:02:57 | INFO  | Wait 1 second(s) until the next check
2026-03-13 01:03:00.233098 | orchestrator | 2026-03-13 01:03:00 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED
2026-03-13 01:03:00.234408 | orchestrator | 2026-03-13 01:03:00 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED
2026-03-13 01:03:00.237681 | orchestrator | 2026-03-13 01:03:00 | INFO  | Task 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a is in state STARTED
2026-03-13 01:03:00.239693 | orchestrator | 2026-03-13 01:03:00 | INFO  | Task 3f4a09a0-57f8-4215-b51e-2a6b7f3711cd is in state STARTED
2026-03-13 01:03:00.239726 | orchestrator | 2026-03-13 01:03:00 | INFO  | Wait 1 second(s) until the next check
2026-03-13 01:03:03.261846 | orchestrator | 2026-03-13 01:03:03 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED
2026-03-13 01:03:03.262122 | orchestrator | 2026-03-13 01:03:03 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED
2026-03-13 01:03:03.262700 | orchestrator | 2026-03-13 01:03:03 | INFO  | Task 798d6f86-488f-45d2-a5e7-b7d5ae888a74 is in state STARTED
2026-03-13 01:03:03.263387 | orchestrator | 2026-03-13 01:03:03 | INFO  | Task 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a is in state STARTED
2026-03-13 01:03:03.264321 | orchestrator | 2026-03-13 01:03:03 | INFO  | Task 3f4a09a0-57f8-4215-b51e-2a6b7f3711cd is in state SUCCESS
2026-03-13 01:03:03.265643 | orchestrator |
2026-03-13 01:03:03.265677 | orchestrator |
2026-03-13 01:03:03.265687 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-13 01:03:03.265697 | orchestrator |
2026-03-13 01:03:03.265706 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-13 01:03:03.265714 | orchestrator | Friday 13 March 2026 01:01:57 +0000 (0:00:00.204) 0:00:00.204 **********
2026-03-13 01:03:03.265722 | orchestrator | ok: [testbed-node-0]
2026-03-13 01:03:03.265730 | orchestrator | ok: [testbed-node-1]
2026-03-13 01:03:03.265737 | orchestrator | ok: [testbed-node-2]
2026-03-13 01:03:03.265745 | orchestrator |
2026-03-13 01:03:03.265753 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-13 01:03:03.265760 | orchestrator | Friday 13 March 2026 01:01:58 +0000 (0:00:00.263) 0:00:00.468 **********
2026-03-13 01:03:03.265768 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-03-13 01:03:03.265776 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-03-13 01:03:03.265784 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-03-13 01:03:03.265791 | orchestrator |
2026-03-13 01:03:03.265798 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-03-13 01:03:03.265805 | orchestrator |
2026-03-13 01:03:03.265813 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-13 01:03:03.265821 | orchestrator | Friday 13 March 2026 01:01:58 +0000 (0:00:00.346) 0:00:00.814 **********
2026-03-13 01:03:03.265829 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 01:03:03.265838 | orchestrator |
2026-03-13 01:03:03.265846 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2026-03-13 01:03:03.265853 | orchestrator | Friday 13 March 2026 01:01:58 +0000 (0:00:00.409) 0:00:01.223 **********
2026-03-13 01:03:03.265861 | orchestrator | changed:
[testbed-node-0] => (item=placement (placement)) 2026-03-13 01:03:03.265870 | orchestrator | 2026-03-13 01:03:03.265878 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-03-13 01:03:03.265886 | orchestrator | Friday 13 March 2026 01:02:02 +0000 (0:00:03.688) 0:00:04.912 ********** 2026-03-13 01:03:03.265894 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-03-13 01:03:03.265902 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-03-13 01:03:03.265911 | orchestrator | 2026-03-13 01:03:03.265919 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-03-13 01:03:03.265927 | orchestrator | Friday 13 March 2026 01:02:08 +0000 (0:00:06.046) 0:00:10.959 ********** 2026-03-13 01:03:03.265935 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-13 01:03:03.265943 | orchestrator | 2026-03-13 01:03:03.265952 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-03-13 01:03:03.265959 | orchestrator | Friday 13 March 2026 01:02:11 +0000 (0:00:02.817) 0:00:13.776 ********** 2026-03-13 01:03:03.265967 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-03-13 01:03:03.265974 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-13 01:03:03.265982 | orchestrator | 2026-03-13 01:03:03.265990 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-03-13 01:03:03.265999 | orchestrator | Friday 13 March 2026 01:02:14 +0000 (0:00:03.390) 0:00:17.167 ********** 2026-03-13 01:03:03.266076 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-13 01:03:03.266088 | orchestrator | 2026-03-13 01:03:03.266097 | orchestrator | TASK [service-ks-register : placement | Granting user roles] 
******************* 2026-03-13 01:03:03.266105 | orchestrator | Friday 13 March 2026 01:02:18 +0000 (0:00:03.297) 0:00:20.465 ********** 2026-03-13 01:03:03.266114 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-03-13 01:03:03.266121 | orchestrator | 2026-03-13 01:03:03.266129 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-13 01:03:03.266137 | orchestrator | Friday 13 March 2026 01:02:21 +0000 (0:00:03.674) 0:00:24.139 ********** 2026-03-13 01:03:03.266145 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:03:03.266153 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:03:03.266161 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:03:03.266169 | orchestrator | 2026-03-13 01:03:03.266177 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-03-13 01:03:03.266195 | orchestrator | Friday 13 March 2026 01:02:22 +0000 (0:00:00.293) 0:00:24.433 ********** 2026-03-13 01:03:03.266204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-13 01:03:03.266223 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-13 01:03:03.266229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-13 01:03:03.266233 | orchestrator | 2026-03-13 01:03:03.266238 | orchestrator | TASK [placement : Check if policies shall be overwritten] 
********************** 2026-03-13 01:03:03.266248 | orchestrator | Friday 13 March 2026 01:02:23 +0000 (0:00:01.053) 0:00:25.487 ********** 2026-03-13 01:03:03.266253 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:03:03.266258 | orchestrator | 2026-03-13 01:03:03.266262 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-03-13 01:03:03.266267 | orchestrator | Friday 13 March 2026 01:02:23 +0000 (0:00:00.113) 0:00:25.600 ********** 2026-03-13 01:03:03.266271 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:03:03.266276 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:03:03.266280 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:03:03.266285 | orchestrator | 2026-03-13 01:03:03.266289 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-13 01:03:03.266294 | orchestrator | Friday 13 March 2026 01:02:23 +0000 (0:00:00.399) 0:00:26.000 ********** 2026-03-13 01:03:03.266299 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 01:03:03.266303 | orchestrator | 2026-03-13 01:03:03.266308 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-03-13 01:03:03.266312 | orchestrator | Friday 13 March 2026 01:02:24 +0000 (0:00:00.474) 0:00:26.475 ********** 2026-03-13 01:03:03.266320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-13 01:03:03.266332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-13 01:03:03.266341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-13 01:03:03.266360 | orchestrator | 2026-03-13 01:03:03.266368 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-03-13 01:03:03.266375 | orchestrator | Friday 13 March 2026 01:02:25 +0000 (0:00:01.412) 0:00:27.887 ********** 2026-03-13 01:03:03.266382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-13 01:03:03.266389 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:03:03.266412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-13 01:03:03.266421 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:03:03.266433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-13 01:03:03.266524 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:03:03.266660 | orchestrator | 2026-03-13 01:03:03.266682 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-03-13 01:03:03.266697 | orchestrator | Friday 13 March 2026 01:02:26 +0000 (0:00:00.705) 0:00:28.592 ********** 2026-03-13 01:03:03.266713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 
'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-13 01:03:03.266758 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:03:03.266774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-13 01:03:03.266788 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:03:03.266816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-13 01:03:03.266830 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:03:03.266844 | orchestrator | 2026-03-13 01:03:03.266857 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-03-13 01:03:03.266870 | orchestrator | Friday 13 March 2026 01:02:26 +0000 (0:00:00.592) 0:00:29.185 ********** 2026-03-13 01:03:03.266921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-13 01:03:03.266939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-13 01:03:03.266963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-13 01:03:03.266978 | 
orchestrator | 2026-03-13 01:03:03.266990 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-03-13 01:03:03.267002 | orchestrator | Friday 13 March 2026 01:02:28 +0000 (0:00:01.223) 0:00:30.408 ********** 2026-03-13 01:03:03.267015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-13 01:03:03.267036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-13 01:03:03.267081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-13 01:03:03.267106 | orchestrator | 2026-03-13 01:03:03.267118 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-03-13 01:03:03.267131 | orchestrator | Friday 13 March 2026 01:02:30 +0000 (0:00:02.216) 0:00:32.624 ********** 2026-03-13 01:03:03.267143 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-13 01:03:03.267157 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-13 01:03:03.267171 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-13 01:03:03.267184 | orchestrator | 2026-03-13 01:03:03.267197 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-03-13 01:03:03.267211 | orchestrator | Friday 13 March 
2026 01:02:31 +0000 (0:00:01.416) 0:00:34.041 ********** 2026-03-13 01:03:03.267225 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:03:03.267237 | orchestrator | changed: [testbed-node-1] 2026-03-13 01:03:03.267251 | orchestrator | changed: [testbed-node-2] 2026-03-13 01:03:03.267263 | orchestrator | 2026-03-13 01:03:03.267276 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-03-13 01:03:03.267290 | orchestrator | Friday 13 March 2026 01:02:32 +0000 (0:00:01.231) 0:00:35.272 ********** 2026-03-13 01:03:03.267303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-13 01:03:03.267318 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:03:03.267337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-13 01:03:03.267351 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:03:03.267376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-13 01:03:03.267400 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:03:03.267413 | orchestrator | 2026-03-13 01:03:03.267427 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-03-13 01:03:03.267440 | orchestrator | Friday 13 March 2026 01:02:33 +0000 (0:00:00.450) 0:00:35.723 ********** 2026-03-13 01:03:03.267454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-13 01:03:03.267467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-13 01:03:03.267491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-13 01:03:03.267507 | orchestrator | 2026-03-13 01:03:03.267520 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-03-13 01:03:03.267534 | orchestrator | Friday 13 March 2026 01:02:34 +0000 (0:00:01.037) 0:00:36.760 ********** 2026-03-13 01:03:03.267546 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:03:03.267560 | orchestrator | 2026-03-13 01:03:03.267574 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-03-13 01:03:03.267587 | orchestrator | Friday 13 March 2026 01:02:36 +0000 (0:00:02.270) 0:00:39.030 ********** 2026-03-13 01:03:03.267608 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:03:03.267621 | orchestrator | 2026-03-13 01:03:03.267635 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-03-13 01:03:03.267647 | orchestrator | Friday 13 March 2026 01:02:39 +0000 (0:00:03.068) 0:00:42.099 ********** 2026-03-13 01:03:03.267662 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:03:03.267674 | orchestrator | 2026-03-13 01:03:03.267688 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-13 01:03:03.267701 | orchestrator | Friday 13 March 2026 01:02:51 +0000 
(0:00:11.465) 0:00:53.564 ********** 2026-03-13 01:03:03.267714 | orchestrator | 2026-03-13 01:03:03.267728 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-13 01:03:03.267742 | orchestrator | Friday 13 March 2026 01:02:51 +0000 (0:00:00.058) 0:00:53.623 ********** 2026-03-13 01:03:03.267755 | orchestrator | 2026-03-13 01:03:03.267778 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-13 01:03:03.267793 | orchestrator | Friday 13 March 2026 01:02:51 +0000 (0:00:00.056) 0:00:53.680 ********** 2026-03-13 01:03:03.267807 | orchestrator | 2026-03-13 01:03:03.267820 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-03-13 01:03:03.267834 | orchestrator | Friday 13 March 2026 01:02:51 +0000 (0:00:00.060) 0:00:53.740 ********** 2026-03-13 01:03:03.267848 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:03:03.267861 | orchestrator | changed: [testbed-node-2] 2026-03-13 01:03:03.267874 | orchestrator | changed: [testbed-node-1] 2026-03-13 01:03:03.267888 | orchestrator | 2026-03-13 01:03:03.267902 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 01:03:03.267918 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-13 01:03:03.267933 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-13 01:03:03.267947 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-13 01:03:03.267960 | orchestrator | 2026-03-13 01:03:03.267975 | orchestrator | 2026-03-13 01:03:03.267988 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 01:03:03.268002 | orchestrator | Friday 13 March 2026 01:03:00 +0000 (0:00:09.544) 0:01:03.284 
********** 2026-03-13 01:03:03.268016 | orchestrator | =============================================================================== 2026-03-13 01:03:03.268029 | orchestrator | placement : Running placement bootstrap container ---------------------- 11.47s 2026-03-13 01:03:03.268043 | orchestrator | placement : Restart placement-api container ----------------------------- 9.54s 2026-03-13 01:03:03.268077 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.05s 2026-03-13 01:03:03.268092 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.69s 2026-03-13 01:03:03.268105 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.67s 2026-03-13 01:03:03.268120 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.39s 2026-03-13 01:03:03.268130 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.30s 2026-03-13 01:03:03.268139 | orchestrator | placement : Creating placement databases user and setting permissions --- 3.07s 2026-03-13 01:03:03.268147 | orchestrator | service-ks-register : placement | Creating projects --------------------- 2.82s 2026-03-13 01:03:03.268155 | orchestrator | placement : Creating placement databases -------------------------------- 2.27s 2026-03-13 01:03:03.268162 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.22s 2026-03-13 01:03:03.268170 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.42s 2026-03-13 01:03:03.268178 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.41s 2026-03-13 01:03:03.268205 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.23s 2026-03-13 01:03:03.268213 | orchestrator | placement : Copying over config.json files for services ----------------- 1.22s 
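The repeated "Task … is in state STARTED" records that follow come from the deploy wrapper polling each background task's state once per second until it reports SUCCESS. A minimal sketch of such a poll loop, assuming a hypothetical `get_task_state` helper in place of the real OSISM task-queue client (whose API may differ):

```python
import time

# Hypothetical in-memory task-state store standing in for the real backend.
TASK_STATES = {}

def get_task_state(task_id):
    # Placeholder: the real client would query the task queue for this ID.
    return TASK_STATES.get(task_id, "STARTED")

def wait_for_tasks(task_ids, interval=1.0, timeout=600.0):
    """Poll every task until each reaches a terminal state.

    Returns a dict mapping task ID to its final state.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    results = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                results[task_id] = state
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return results
```

The fixed one-second sleep matches the "Wait 1 second(s) until the next check" lines in the log; a production loop might add jitter or backoff instead.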
2026-03-13 01:03:03.268221 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.05s 2026-03-13 01:03:03.268229 | orchestrator | placement : Check placement containers ---------------------------------- 1.04s 2026-03-13 01:03:03.268237 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.71s 2026-03-13 01:03:03.268245 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.59s 2026-03-13 01:03:03.268253 | orchestrator | placement : include_tasks ----------------------------------------------- 0.47s 2026-03-13 01:03:03.268261 | orchestrator | 2026-03-13 01:03:03 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:03:06.294600 | orchestrator | 2026-03-13 01:03:06 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED 2026-03-13 01:03:06.295548 | orchestrator | 2026-03-13 01:03:06 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 01:03:06.296817 | orchestrator | 2026-03-13 01:03:06 | INFO  | Task 798d6f86-488f-45d2-a5e7-b7d5ae888a74 is in state STARTED 2026-03-13 01:03:06.297815 | orchestrator | 2026-03-13 01:03:06 | INFO  | Task 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a is in state STARTED 2026-03-13 01:03:06.297922 | orchestrator | 2026-03-13 01:03:06 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:03:09.338567 | orchestrator | 2026-03-13 01:03:09 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED 2026-03-13 01:03:09.342241 | orchestrator | 2026-03-13 01:03:09 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 01:03:09.343756 | orchestrator | 2026-03-13 01:03:09 | INFO  | Task 7b6a4f16-d3f8-4f5c-8d27-c9f6b0982ad9 is in state STARTED 2026-03-13 01:03:09.344895 | orchestrator | 2026-03-13 01:03:09 | INFO  | Task 798d6f86-488f-45d2-a5e7-b7d5ae888a74 is in state SUCCESS 2026-03-13 01:03:09.346203 | orchestrator | 2026-03-13 
01:03:09 | INFO  | Task 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a is in state STARTED 2026-03-13 01:03:09.347519 | orchestrator | 2026-03-13 01:03:09 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:03:12.400635 | orchestrator | 2026-03-13 01:03:12 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED 2026-03-13 01:03:12.403343 | orchestrator | 2026-03-13 01:03:12 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 01:03:12.404588 | orchestrator | 2026-03-13 01:03:12 | INFO  | Task 7b6a4f16-d3f8-4f5c-8d27-c9f6b0982ad9 is in state STARTED 2026-03-13 01:03:12.405995 | orchestrator | 2026-03-13 01:03:12 | INFO  | Task 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a is in state STARTED 2026-03-13 01:03:12.406078 | orchestrator | 2026-03-13 01:03:12 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:03:15.445684 | orchestrator | 2026-03-13 01:03:15 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED 2026-03-13 01:03:15.448000 | orchestrator | 2026-03-13 01:03:15 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 01:03:15.450388 | orchestrator | 2026-03-13 01:03:15 | INFO  | Task a78b234f-5c53-47f3-83b5-30fcc131e144 is in state STARTED 2026-03-13 01:03:15.453104 | orchestrator | 2026-03-13 01:03:15 | INFO  | Task 7b6a4f16-d3f8-4f5c-8d27-c9f6b0982ad9 is in state STARTED 2026-03-13 01:03:15.453164 | orchestrator | 2026-03-13 01:03:15 | INFO  | Task 6a5a2c91-0142-47c0-90ed-2a4f9258ee2a is in state SUCCESS 2026-03-13 01:03:15.453178 | orchestrator | 2026-03-13 01:03:15 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:03:18.496220 | orchestrator | 2026-03-13 01:03:18 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED 2026-03-13 01:03:18.497103 | orchestrator | 2026-03-13 01:03:18 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 01:03:18.497244 | orchestrator | 2026-03-13 01:03:18 | INFO  | Task 
a78b234f-5c53-47f3-83b5-30fcc131e144 is in state STARTED 2026-03-13 01:03:18.497929 | orchestrator | 2026-03-13 01:03:18 | INFO  | Task 7b6a4f16-d3f8-4f5c-8d27-c9f6b0982ad9 is in state STARTED 2026-03-13 01:03:18.497950 | orchestrator | 2026-03-13 01:03:18 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:03:21.527437 | orchestrator | 2026-03-13 01:03:21 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED 2026-03-13 01:03:21.527555 | orchestrator | 2026-03-13 01:03:21 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 01:03:21.529690 | orchestrator | 2026-03-13 01:03:21 | INFO  | Task a78b234f-5c53-47f3-83b5-30fcc131e144 is in state STARTED 2026-03-13 01:03:21.530239 | orchestrator | 2026-03-13 01:03:21 | INFO  | Task 7b6a4f16-d3f8-4f5c-8d27-c9f6b0982ad9 is in state STARTED 2026-03-13 01:03:21.530281 | orchestrator | 2026-03-13 01:03:21 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:03:24.806669 | orchestrator | 2026-03-13 01:03:24 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED 2026-03-13 01:03:24.807365 | orchestrator | 2026-03-13 01:03:24 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 01:03:24.808187 | orchestrator | 2026-03-13 01:03:24 | INFO  | Task a78b234f-5c53-47f3-83b5-30fcc131e144 is in state STARTED 2026-03-13 01:03:24.808965 | orchestrator | 2026-03-13 01:03:24 | INFO  | Task 7b6a4f16-d3f8-4f5c-8d27-c9f6b0982ad9 is in state STARTED 2026-03-13 01:03:24.809132 | orchestrator | 2026-03-13 01:03:24 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:03:27.834089 | orchestrator | 2026-03-13 01:03:27 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED 2026-03-13 01:03:27.834598 | orchestrator | 2026-03-13 01:03:27 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 01:03:27.835483 | orchestrator | 2026-03-13 01:03:27 | INFO  | Task 
a78b234f-5c53-47f3-83b5-30fcc131e144 is in state STARTED 2026-03-13 01:03:27.836358 | orchestrator | 2026-03-13 01:03:27 | INFO  | Task 7b6a4f16-d3f8-4f5c-8d27-c9f6b0982ad9 is in state STARTED 2026-03-13 01:03:27.836385 | orchestrator | 2026-03-13 01:03:27 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:03:30.861757 | orchestrator | 2026-03-13 01:03:30 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED 2026-03-13 01:03:30.863479 | orchestrator | 2026-03-13 01:03:30 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 01:03:30.866263 | orchestrator | 2026-03-13 01:03:30 | INFO  | Task a78b234f-5c53-47f3-83b5-30fcc131e144 is in state STARTED 2026-03-13 01:03:30.866326 | orchestrator | 2026-03-13 01:03:30 | INFO  | Task 7b6a4f16-d3f8-4f5c-8d27-c9f6b0982ad9 is in state STARTED 2026-03-13 01:03:30.866336 | orchestrator | 2026-03-13 01:03:30 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:03:33.949627 | orchestrator | 2026-03-13 01:03:33 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED 2026-03-13 01:03:33.949682 | orchestrator | 2026-03-13 01:03:33 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 01:03:33.949690 | orchestrator | 2026-03-13 01:03:33 | INFO  | Task a78b234f-5c53-47f3-83b5-30fcc131e144 is in state STARTED 2026-03-13 01:03:33.949696 | orchestrator | 2026-03-13 01:03:33 | INFO  | Task 7b6a4f16-d3f8-4f5c-8d27-c9f6b0982ad9 is in state STARTED 2026-03-13 01:03:33.949719 | orchestrator | 2026-03-13 01:03:33 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:03:36.951511 | orchestrator | 2026-03-13 01:03:36 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED 2026-03-13 01:03:36.952243 | orchestrator | 2026-03-13 01:03:36 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 01:03:36.952877 | orchestrator | 2026-03-13 01:03:36 | INFO  | Task 
a78b234f-5c53-47f3-83b5-30fcc131e144 is in state STARTED 2026-03-13 01:03:36.954821 | orchestrator | 2026-03-13 01:03:36 | INFO  | Task 7b6a4f16-d3f8-4f5c-8d27-c9f6b0982ad9 is in state STARTED 2026-03-13 01:03:36.954857 | orchestrator | 2026-03-13 01:03:36 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:03:40.074552 | orchestrator | 2026-03-13 01:03:40 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED 2026-03-13 01:03:40.074615 | orchestrator | 2026-03-13 01:03:40 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 01:03:40.074624 | orchestrator | 2026-03-13 01:03:40 | INFO  | Task a78b234f-5c53-47f3-83b5-30fcc131e144 is in state STARTED 2026-03-13 01:03:40.074630 | orchestrator | 2026-03-13 01:03:40 | INFO  | Task 7b6a4f16-d3f8-4f5c-8d27-c9f6b0982ad9 is in state STARTED 2026-03-13 01:03:40.074635 | orchestrator | 2026-03-13 01:03:40 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:03:43.072549 | orchestrator | 2026-03-13 01:03:43 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED 2026-03-13 01:03:43.072597 | orchestrator | 2026-03-13 01:03:43 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state STARTED 2026-03-13 01:03:43.072605 | orchestrator | 2026-03-13 01:03:43 | INFO  | Task a78b234f-5c53-47f3-83b5-30fcc131e144 is in state STARTED 2026-03-13 01:03:43.072611 | orchestrator | 2026-03-13 01:03:43 | INFO  | Task 7b6a4f16-d3f8-4f5c-8d27-c9f6b0982ad9 is in state STARTED 2026-03-13 01:03:43.072617 | orchestrator | 2026-03-13 01:03:43 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:03:46.103651 | orchestrator | 2026-03-13 01:03:46 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED 2026-03-13 01:03:46.104900 | orchestrator | 2026-03-13 01:03:46 | INFO  | Task cfa15fa7-1b6b-4a9b-9423-f46683b59c87 is in state SUCCESS 2026-03-13 01:03:46.107329 | orchestrator | 2026-03-13 01:03:46.107378 | orchestrator | 2026-03-13 
01:03:46.107386 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-13 01:03:46.107393 | orchestrator | 2026-03-13 01:03:46.107399 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-13 01:03:46.107405 | orchestrator | Friday 13 March 2026 01:03:05 +0000 (0:00:00.172) 0:00:00.172 ********** 2026-03-13 01:03:46.107411 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:03:46.107418 | orchestrator | ok: [testbed-node-1] 2026-03-13 01:03:46.107424 | orchestrator | ok: [testbed-node-2] 2026-03-13 01:03:46.107429 | orchestrator | 2026-03-13 01:03:46.107446 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-13 01:03:46.107452 | orchestrator | Friday 13 March 2026 01:03:05 +0000 (0:00:00.281) 0:00:00.453 ********** 2026-03-13 01:03:46.107458 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-13 01:03:46.107463 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-13 01:03:46.107602 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-13 01:03:46.107617 | orchestrator | 2026-03-13 01:03:46.107626 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-03-13 01:03:46.107669 | orchestrator | 2026-03-13 01:03:46.107679 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-03-13 01:03:46.107707 | orchestrator | Friday 13 March 2026 01:03:05 +0000 (0:00:00.567) 0:00:01.021 ********** 2026-03-13 01:03:46.107716 | orchestrator | ok: [testbed-node-1] 2026-03-13 01:03:46.107725 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:03:46.107733 | orchestrator | ok: [testbed-node-2] 2026-03-13 01:03:46.108129 | orchestrator | 2026-03-13 01:03:46.108137 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 
01:03:46.108141 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-13 01:03:46.108147 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-13 01:03:46.108151 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-13 01:03:46.108155 | orchestrator | 2026-03-13 01:03:46.108158 | orchestrator | 2026-03-13 01:03:46.108163 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 01:03:46.108167 | orchestrator | Friday 13 March 2026 01:03:06 +0000 (0:00:00.640) 0:00:01.661 ********** 2026-03-13 01:03:46.108170 | orchestrator | =============================================================================== 2026-03-13 01:03:46.108174 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.64s 2026-03-13 01:03:46.108178 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.57s 2026-03-13 01:03:46.108181 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s 2026-03-13 01:03:46.108185 | orchestrator | 2026-03-13 01:03:46.108189 | orchestrator | 2026-03-13 01:03:46.108193 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-03-13 01:03:46.108196 | orchestrator | 2026-03-13 01:03:46.108200 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-03-13 01:03:46.108204 | orchestrator | Friday 13 March 2026 00:59:51 +0000 (0:00:00.093) 0:00:00.093 ********** 2026-03-13 01:03:46.108208 | orchestrator | changed: [localhost] 2026-03-13 01:03:46.108211 | orchestrator | 2026-03-13 01:03:46.108215 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-03-13 01:03:46.108219 | orchestrator | Friday 13 March 
2026 00:59:52 +0000 (0:00:01.039) 0:00:01.133 ********** 2026-03-13 01:03:46.108223 | orchestrator | 2026-03-13 01:03:46.108226 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-03-13 01:03:46.108230 | orchestrator | 2026-03-13 01:03:46.108234 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-03-13 01:03:46.108238 | orchestrator | changed: [localhost] 2026-03-13 01:03:46.108244 | orchestrator | 2026-03-13 01:03:46.108250 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2026-03-13 01:03:46.108259 | orchestrator | Friday 13 March 2026 01:02:59 +0000 (0:03:06.422) 0:03:07.555 ********** 2026-03-13 01:03:46.108266 | orchestrator | changed: [localhost] 2026-03-13 01:03:46.108273 | orchestrator | 2026-03-13 01:03:46.108278 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-13 01:03:46.108284 | orchestrator | 2026-03-13 01:03:46.108290 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-13 01:03:46.108296 | orchestrator | Friday 13 March 2026 01:03:12 +0000 (0:00:13.513) 0:03:21.068 ********** 2026-03-13 01:03:46.108301 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:03:46.108305 | orchestrator | ok: [testbed-node-1] 2026-03-13 01:03:46.108309 | orchestrator | ok: [testbed-node-2] 2026-03-13 01:03:46.108312 | orchestrator | 2026-03-13 01:03:46.108316 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-13 01:03:46.108320 | orchestrator | Friday 13 March 2026 01:03:13 +0000 (0:00:00.286) 0:03:21.355 ********** 2026-03-13 01:03:46.108324 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2026-03-13 01:03:46.108328 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2026-03-13 01:03:46.108332 | orchestrator | ok: 
[testbed-node-2] => (item=enable_ironic_False) 2026-03-13 01:03:46.108343 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2026-03-13 01:03:46.108347 | orchestrator | 2026-03-13 01:03:46.108351 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2026-03-13 01:03:46.108354 | orchestrator | skipping: no hosts matched 2026-03-13 01:03:46.108359 | orchestrator | 2026-03-13 01:03:46.108362 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 01:03:46.108379 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-13 01:03:46.108405 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-13 01:03:46.108410 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-13 01:03:46.108413 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-13 01:03:46.108417 | orchestrator | 2026-03-13 01:03:46.108421 | orchestrator | 2026-03-13 01:03:46.108429 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 01:03:46.108433 | orchestrator | Friday 13 March 2026 01:03:13 +0000 (0:00:00.499) 0:03:21.854 ********** 2026-03-13 01:03:46.108437 | orchestrator | =============================================================================== 2026-03-13 01:03:46.108441 | orchestrator | Download ironic-agent initramfs --------------------------------------- 186.42s 2026-03-13 01:03:46.108444 | orchestrator | Download ironic-agent kernel ------------------------------------------- 13.51s 2026-03-13 01:03:46.108448 | orchestrator | Ensure the destination directory exists --------------------------------- 1.04s 2026-03-13 01:03:46.108452 | orchestrator | Group hosts based on enabled 
services ----------------------------------- 0.50s 2026-03-13 01:03:46.108455 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s 2026-03-13 01:03:46.108459 | orchestrator | 2026-03-13 01:03:46.108463 | orchestrator | 2026-03-13 01:03:46.108466 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-13 01:03:46.108470 | orchestrator | 2026-03-13 01:03:46.108474 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-13 01:03:46.108477 | orchestrator | Friday 13 March 2026 00:59:52 +0000 (0:00:00.267) 0:00:00.267 ********** 2026-03-13 01:03:46.108481 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:03:46.108485 | orchestrator | ok: [testbed-node-1] 2026-03-13 01:03:46.108488 | orchestrator | ok: [testbed-node-2] 2026-03-13 01:03:46.108492 | orchestrator | ok: [testbed-node-3] 2026-03-13 01:03:46.108496 | orchestrator | ok: [testbed-node-4] 2026-03-13 01:03:46.108500 | orchestrator | ok: [testbed-node-5] 2026-03-13 01:03:46.108503 | orchestrator | 2026-03-13 01:03:46.108507 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-13 01:03:46.108511 | orchestrator | Friday 13 March 2026 00:59:53 +0000 (0:00:00.689) 0:00:00.956 ********** 2026-03-13 01:03:46.108515 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-03-13 01:03:46.108518 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-03-13 01:03:46.108522 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-03-13 01:03:46.108526 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-03-13 01:03:46.108529 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-03-13 01:03:46.108533 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-03-13 01:03:46.108537 | orchestrator | 2026-03-13 01:03:46.108540 | 
orchestrator | PLAY [Apply role neutron] ******************************************************
2026-03-13 01:03:46.108544 | orchestrator |
2026-03-13 01:03:46.108548 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-13 01:03:46.108552 | orchestrator | Friday 13 March 2026 00:59:53 +0000 (0:00:00.596) 0:00:01.552 **********
2026-03-13 01:03:46.108559 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-13 01:03:46.108562 | orchestrator |
2026-03-13 01:03:46.108566 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-03-13 01:03:46.108570 | orchestrator | Friday 13 March 2026 00:59:54 +0000 (0:00:00.889) 0:00:02.442 **********
2026-03-13 01:03:46.108575 | orchestrator | ok: [testbed-node-1]
2026-03-13 01:03:46.108581 | orchestrator | ok: [testbed-node-0]
2026-03-13 01:03:46.108587 | orchestrator | ok: [testbed-node-2]
2026-03-13 01:03:46.108596 | orchestrator | ok: [testbed-node-3]
2026-03-13 01:03:46.108604 | orchestrator | ok: [testbed-node-4]
2026-03-13 01:03:46.108610 | orchestrator | ok: [testbed-node-5]
2026-03-13 01:03:46.108616 | orchestrator |
2026-03-13 01:03:46.108623 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-03-13 01:03:46.108628 | orchestrator | Friday 13 March 2026 00:59:55 +0000 (0:00:01.247) 0:00:03.689 **********
2026-03-13 01:03:46.108635 | orchestrator | ok: [testbed-node-0]
2026-03-13 01:03:46.108641 | orchestrator | ok: [testbed-node-1]
2026-03-13 01:03:46.108647 | orchestrator | ok: [testbed-node-2]
2026-03-13 01:03:46.108687 | orchestrator | ok: [testbed-node-3]
2026-03-13 01:03:46.108693 | orchestrator | ok: [testbed-node-4]
2026-03-13 01:03:46.108699 | orchestrator | ok: [testbed-node-5]
2026-03-13 01:03:46.108705 | orchestrator |
2026-03-13 01:03:46.108711 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-03-13 01:03:46.108715 | orchestrator | Friday 13 March 2026 00:59:56 +0000 (0:00:01.113) 0:00:04.802 **********
2026-03-13 01:03:46.108719 | orchestrator | ok: [testbed-node-0] => {
2026-03-13 01:03:46.108723 | orchestrator |     "changed": false,
2026-03-13 01:03:46.108727 | orchestrator |     "msg": "All assertions passed"
2026-03-13 01:03:46.108731 | orchestrator | }
2026-03-13 01:03:46.108735 | orchestrator | ok: [testbed-node-1] => {
2026-03-13 01:03:46.108738 | orchestrator |     "changed": false,
2026-03-13 01:03:46.108742 | orchestrator |     "msg": "All assertions passed"
2026-03-13 01:03:46.108746 | orchestrator | }
2026-03-13 01:03:46.108749 | orchestrator | ok: [testbed-node-2] => {
2026-03-13 01:03:46.108753 | orchestrator |     "changed": false,
2026-03-13 01:03:46.108757 | orchestrator |     "msg": "All assertions passed"
2026-03-13 01:03:46.108761 | orchestrator | }
2026-03-13 01:03:46.108764 | orchestrator | ok: [testbed-node-3] => {
2026-03-13 01:03:46.108768 | orchestrator |     "changed": false,
2026-03-13 01:03:46.108772 | orchestrator |     "msg": "All assertions passed"
2026-03-13 01:03:46.108776 | orchestrator | }
2026-03-13 01:03:46.108779 | orchestrator | ok: [testbed-node-4] => {
2026-03-13 01:03:46.108783 | orchestrator |     "changed": false,
2026-03-13 01:03:46.108787 | orchestrator |     "msg": "All assertions passed"
2026-03-13 01:03:46.108791 | orchestrator | }
2026-03-13 01:03:46.108794 | orchestrator | ok: [testbed-node-5] => {
2026-03-13 01:03:46.108798 | orchestrator |     "changed": false,
2026-03-13 01:03:46.108802 | orchestrator |     "msg": "All assertions passed"
2026-03-13 01:03:46.108805 | orchestrator | }
2026-03-13 01:03:46.108809 | orchestrator |
2026-03-13 01:03:46.108813 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-03-13 01:03:46.108830 | orchestrator | Friday 13 March 2026 00:59:57 +0000 (0:00:00.653) 0:00:05.456 **********
2026-03-13 01:03:46.108834 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:03:46.108838 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:03:46.108842 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:03:46.108846 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:03:46.108850 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:03:46.108853 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:03:46.108857 | orchestrator |
2026-03-13 01:03:46.108864 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2026-03-13 01:03:46.108868 | orchestrator | Friday 13 March 2026 00:59:58 +0000 (0:00:00.550) 0:00:06.006 **********
2026-03-13 01:03:46.108871 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2026-03-13 01:03:46.108879 | orchestrator |
2026-03-13 01:03:46.108883 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2026-03-13 01:03:46.108886 | orchestrator | Friday 13 March 2026 01:00:01 +0000 (0:00:03.332) 0:00:09.338 **********
2026-03-13 01:03:46.108890 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2026-03-13 01:03:46.108894 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2026-03-13 01:03:46.108898 | orchestrator |
2026-03-13 01:03:46.108902 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2026-03-13 01:03:46.108905 | orchestrator | Friday 13 March 2026 01:00:07 +0000 (0:00:06.407) 0:00:15.745 **********
2026-03-13 01:03:46.108909 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-13 01:03:46.108913 | orchestrator |
2026-03-13 01:03:46.108917 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2026-03-13 01:03:46.108921 | orchestrator | Friday 13 March 2026 01:00:10 +0000 (0:00:03.112) 0:00:18.858 **********
2026-03-13 01:03:46.108924 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2026-03-13 01:03:46.108928 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-13 01:03:46.108932 | orchestrator |
2026-03-13 01:03:46.108936 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2026-03-13 01:03:46.108939 | orchestrator | Friday 13 March 2026 01:00:14 +0000 (0:00:03.684) 0:00:22.542 **********
2026-03-13 01:03:46.108943 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-13 01:03:46.108947 | orchestrator |
2026-03-13 01:03:46.108951 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2026-03-13 01:03:46.108955 | orchestrator | Friday 13 March 2026 01:00:17 +0000 (0:00:03.269) 0:00:25.812 **********
2026-03-13 01:03:46.108958 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2026-03-13 01:03:46.108962 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2026-03-13 01:03:46.108966 | orchestrator |
2026-03-13 01:03:46.108970 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-13 01:03:46.108973 | orchestrator | Friday 13 March 2026 01:00:24 +0000 (0:00:06.981) 0:00:32.794 **********
2026-03-13 01:03:46.108977 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:03:46.109025 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:03:46.109030 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:03:46.109034 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:03:46.109038 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:03:46.109042 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:03:46.109049 | orchestrator |
2026-03-13 01:03:46.109054 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2026-03-13 01:03:46.109064 | orchestrator | Friday 13 March 2026 01:00:25 +0000 (0:00:00.682) 0:00:33.476 **********
2026-03-13 01:03:46.109071 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:03:46.109078 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:03:46.109084 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:03:46.109090 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:03:46.109097 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:03:46.109103 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:03:46.109110 | orchestrator |
2026-03-13 01:03:46.109117 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2026-03-13 01:03:46.109123 | orchestrator | Friday 13 March 2026 01:00:28 +0000 (0:00:02.724) 0:00:36.201 **********
2026-03-13 01:03:46.109130 | orchestrator | ok: [testbed-node-0]
2026-03-13 01:03:46.109136 | orchestrator | ok: [testbed-node-1]
2026-03-13 01:03:46.109143 | orchestrator | ok: [testbed-node-2]
2026-03-13 01:03:46.109149 | orchestrator | ok: [testbed-node-3]
2026-03-13 01:03:46.109156 | orchestrator | ok: [testbed-node-4]
2026-03-13 01:03:46.109162 | orchestrator | ok: [testbed-node-5]
2026-03-13 01:03:46.109169 | orchestrator |
2026-03-13 01:03:46.109181 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-13 01:03:46.109188 | orchestrator | Friday 13 March 2026 01:00:29 +0000 (0:00:01.192) 0:00:37.393 **********
2026-03-13 01:03:46.109195 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:03:46.109202 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:03:46.109208 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:03:46.109215 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:03:46.109221 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:03:46.109228 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:03:46.109234 | orchestrator |
2026-03-13 01:03:46.109240 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2026-03-13 01:03:46.109247 | orchestrator | Friday 13 March 2026 01:00:33 +0000 (0:00:03.611) 0:00:41.004 **********
2026-03-13 01:03:46.109282 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-13 01:03:46.109295 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-13 01:03:46.109303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-13 01:03:46.109311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-13 01:03:46.109321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-13 01:03:46.109345 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-13 01:03:46.109352 | orchestrator |
2026-03-13 01:03:46.109362 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2026-03-13 01:03:46.109370 | orchestrator | Friday 13 March 2026 01:00:36 +0000 (0:00:03.319) 0:00:44.324 **********
2026-03-13 01:03:46.109377 | orchestrator | [WARNING]: Skipped
2026-03-13 01:03:46.109384 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2026-03-13 01:03:46.109390 | orchestrator | due to this access issue:
2026-03-13 01:03:46.109398 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2026-03-13 01:03:46.109404 | orchestrator | a directory
2026-03-13 01:03:46.109411 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-13 01:03:46.109417 | orchestrator |
2026-03-13 01:03:46.109424 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-13 01:03:46.109432 | orchestrator | Friday 13 March 2026 01:00:37 +0000 (0:00:01.084) 0:00:45.408 **********
2026-03-13 01:03:46.109439 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-13 01:03:46.109446 | orchestrator |
2026-03-13 01:03:46.109453 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2026-03-13 01:03:46.109459 | orchestrator | Friday 13 March 2026 01:00:38 +0000 (0:00:01.135) 0:00:46.544 **********
2026-03-13 01:03:46.109466 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-13 01:03:46.109474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-13 01:03:46.109486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-13 01:03:46.109517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-13 01:03:46.109525 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-13 01:03:46.109532 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-13 01:03:46.109543 | orchestrator |
2026-03-13 01:03:46.109550 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] ***
2026-03-13 01:03:46.109608 | orchestrator | Friday 13 March 2026 01:00:42 +0000 (0:00:04.030) 0:00:50.575 **********
2026-03-13 01:03:46.109619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-13 01:03:46.109625 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:03:46.109632 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-13 01:03:46.109639 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:03:46.109668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-13 01:03:46.109677 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:03:46.109683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-13 01:03:46.109691 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:03:46.109696 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-13 01:03:46.109701 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:03:46.109706 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-13 01:03:46.109711 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:03:46.109716 | orchestrator |
2026-03-13 01:03:46.109720 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] *****
2026-03-13 01:03:46.109725 | orchestrator | Friday 13 March 2026 01:00:45 +0000 (0:00:03.089) 0:00:53.665 **********
2026-03-13 01:03:46.109742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-13 01:03:46.109748 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:03:46.109756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-13 01:03:46.109760 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:03:46.109764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-13 01:03:46.109770 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:03:46.109774 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-13 01:03:46.109778 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:03:46.109782 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-13 01:03:46.109786 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:03:46.109795 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-13 01:03:46.109799 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:03:46.109803 | orchestrator |
2026-03-13 01:03:46.109807 | orchestrator | TASK [neutron : Creating TLS backend PEM File] *********************************
2026-03-13 01:03:46.109811 | orchestrator | Friday 13 March 2026 01:00:48 +0000 (0:00:02.913) 0:00:56.578 **********
2026-03-13 01:03:46.109815 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:03:46.109818 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:03:46.109822 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:03:46.109826 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:03:46.109830 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:03:46.109833 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:03:46.109839 | orchestrator |
2026-03-13 01:03:46.109844 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2026-03-13 01:03:46.109847 | orchestrator | Friday 13 March 2026 01:00:51 +0000 (0:00:02.768) 0:00:59.347 **********
2026-03-13 01:03:46.109851 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:03:46.109855 | orchestrator |
2026-03-13 01:03:46.109859 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2026-03-13 01:03:46.109863 | orchestrator | Friday 13 March 2026 01:00:51 +0000 (0:00:00.100) 0:00:59.448 **********
2026-03-13 01:03:46.109866 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:03:46.109870 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:03:46.109874 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:03:46.109878 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:03:46.109881 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:03:46.109885 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:03:46.109889 | orchestrator |
2026-03-13 01:03:46.109893 | orchestrator | TASK [neutron : Copying over existing policy file] *****************************
2026-03-13 01:03:46.109897 | orchestrator | Friday 13 March 2026 01:00:52 +0000 (0:00:00.750) 0:01:00.198 **********
2026-03-13 01:03:46.109902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-13 01:03:46.109909 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:03:46.109916 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-13 01:03:46.109927 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:03:46.109933 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-13 01:03:46.109940 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:03:46.110008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-13 01:03:46.110059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-13 01:03:46.110066 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:03:46.110072 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:03:46.110079 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-13 01:03:46.110085 | orchestrator | skipping: [testbed-node-4] 2026-03-13 01:03:46.110092 | orchestrator | 2026-03-13 01:03:46.110098 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-13 01:03:46.110103 | orchestrator | Friday 13 March 2026 01:00:54 +0000 (0:00:02.454) 0:01:02.652 ********** 2026-03-13 01:03:46.110107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-13 01:03:46.110117 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-13 01:03:46.110127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-13 01:03:46.110131 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-13 01:03:46.110135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-13 01:03:46.110140 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-13 01:03:46.110144 | orchestrator | 2026-03-13 01:03:46.110147 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-13 01:03:46.110151 | orchestrator | Friday 13 March 2026 01:00:58 +0000 (0:00:03.816) 0:01:06.468 ********** 2026-03-13 01:03:46.110164 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 
'timeout': '30'}}}) 2026-03-13 01:03:46.110169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-13 01:03:46.110173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-13 01:03:46.110177 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-13 01:03:46.110181 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-13 01:03:46.110193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-13 01:03:46.110197 | orchestrator | 2026-03-13 01:03:46.110201 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-13 01:03:46.110205 | orchestrator | Friday 13 March 2026 01:01:04 +0000 (0:00:06.250) 0:01:12.719 ********** 2026-03-13 01:03:46.110209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-13 01:03:46.110213 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:03:46.110217 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-13 01:03:46.110221 | orchestrator | skipping: [testbed-node-5] 2026-03-13 01:03:46.110225 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-13 01:03:46.110232 | orchestrator | skipping: [testbed-node-4] 2026-03-13 01:03:46.110239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-13 01:03:46.110310 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:03:46.110331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-13 01:03:46.110340 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:03:46.110348 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-13 01:03:46.110420 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:03:46.110428 | orchestrator | 2026-03-13 01:03:46.110434 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-03-13 01:03:46.110440 | orchestrator | Friday 13 March 2026 01:01:07 +0000 (0:00:02.392) 0:01:15.112 ********** 2026-03-13 01:03:46.110446 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:03:46.110452 | orchestrator | skipping: [testbed-node-4] 2026-03-13 01:03:46.110457 | orchestrator | skipping: [testbed-node-5] 2026-03-13 01:03:46.110463 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:03:46.110469 | orchestrator | changed: [testbed-node-1] 2026-03-13 01:03:46.110475 | orchestrator | changed: [testbed-node-2] 2026-03-13 01:03:46.110481 | orchestrator | 2026-03-13 01:03:46.110488 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-03-13 01:03:46.110494 | orchestrator | Friday 13 March 2026 01:01:09 +0000 (0:00:02.615) 0:01:17.728 ********** 2026-03-13 01:03:46.110502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 
6640'], 'timeout': '30'}}})  2026-03-13 01:03:46.110517 | orchestrator | skipping: [testbed-node-4] 2026-03-13 01:03:46.110526 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-13 01:03:46.110533 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:03:46.110550 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-13 01:03:46.110558 | orchestrator | skipping: [testbed-node-5] 2026-03-13 01:03:46.110565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-13 01:03:46.110572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-13 01:03:46.110579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-13 01:03:46.110591 | orchestrator | 2026-03-13 01:03:46.110597 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-03-13 01:03:46.110603 | orchestrator | Friday 13 March 2026 01:01:13 +0000 (0:00:03.503) 0:01:21.231 ********** 2026-03-13 01:03:46.110607 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:03:46.110611 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:03:46.110616 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:03:46.110623 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:03:46.110629 | orchestrator | skipping: [testbed-node-4] 2026-03-13 01:03:46.110635 | orchestrator | skipping: [testbed-node-5] 2026-03-13 01:03:46.110641 | orchestrator | 2026-03-13 01:03:46.110648 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-03-13 01:03:46.110654 | orchestrator | Friday 13 March 2026 01:01:15 +0000 (0:00:01.855) 0:01:23.087 ********** 2026-03-13 01:03:46.110661 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:03:46.110667 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:03:46.110673 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:03:46.110680 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:03:46.110686 | orchestrator | skipping: [testbed-node-4] 2026-03-13 
01:03:46.110693 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:03:46.110699 | orchestrator |
2026-03-13 01:03:46.110703 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2026-03-13 01:03:46.110711 | orchestrator | Friday 13 March 2026 01:01:17 +0000 (0:00:01.916) 0:01:25.003 **********
2026-03-13 01:03:46.110718 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:03:46.110724 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:03:46.110730 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:03:46.110737 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:03:46.110743 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:03:46.110750 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:03:46.110756 | orchestrator |
2026-03-13 01:03:46.110763 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2026-03-13 01:03:46.110772 | orchestrator | Friday 13 March 2026 01:01:19 +0000 (0:00:01.977) 0:01:26.981 **********
2026-03-13 01:03:46.110779 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:03:46.110785 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:03:46.110792 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:03:46.110798 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:03:46.110805 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:03:46.110812 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:03:46.110818 | orchestrator |
2026-03-13 01:03:46.110824 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2026-03-13 01:03:46.110832 | orchestrator | Friday 13 March 2026 01:01:21 +0000 (0:00:02.587) 0:01:29.568 **********
2026-03-13 01:03:46.110838 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:03:46.110845 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:03:46.110851 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:03:46.110857 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:03:46.110863 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:03:46.110870 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:03:46.110876 | orchestrator |
2026-03-13 01:03:46.110882 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2026-03-13 01:03:46.110889 | orchestrator | Friday 13 March 2026 01:01:23 +0000 (0:00:01.717) 0:01:31.286 **********
2026-03-13 01:03:46.110899 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:03:46.110906 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:03:46.110912 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:03:46.110919 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:03:46.110925 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:03:46.110932 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:03:46.110941 | orchestrator |
2026-03-13 01:03:46.110948 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2026-03-13 01:03:46.110955 | orchestrator | Friday 13 March 2026 01:01:25 +0000 (0:00:01.700) 0:01:32.987 **********
2026-03-13 01:03:46.110961 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-13 01:03:46.110967 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:03:46.110972 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-13 01:03:46.110978 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:03:46.111016 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-13 01:03:46.111023 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:03:46.111029 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-13 01:03:46.111034 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:03:46.111040 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-13 01:03:46.111046 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:03:46.111052 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-13 01:03:46.111058 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:03:46.111064 | orchestrator |
2026-03-13 01:03:46.111070 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2026-03-13 01:03:46.111076 | orchestrator | Friday 13 March 2026 01:01:26 +0000 (0:00:01.764) 0:01:34.751 **********
2026-03-13 01:03:46.111084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-13 01:03:46.111091 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:03:46.111108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-13 01:03:46.111116 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:03:46.111133 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-13 01:03:46.111139 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:03:46.111145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-13 01:03:46.111151 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:03:46.111158 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-13 01:03:46.111165 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:03:46.111172 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-13 01:03:46.111179 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:03:46.111185 | orchestrator |
2026-03-13 01:03:46.111191 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2026-03-13 01:03:46.111197 | orchestrator | Friday 13 March 2026 01:01:28 +0000 (0:00:01.952) 0:01:36.704 **********
2026-03-13 01:03:46.111211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-13 01:03:46.111221 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:03:46.111228 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-13 01:03:46.111236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-13 01:03:46.111242 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:03:46.111249 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:03:46.111255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-13 01:03:46.111262 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-13 01:03:46.111272 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:03:46.111279 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:03:46.111293 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-13 01:03:46.111301 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:03:46.111308 | orchestrator |
2026-03-13 01:03:46.111315 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2026-03-13 01:03:46.111321 | orchestrator | Friday 13 March 2026 01:01:30 +0000 (0:00:01.922) 0:01:38.626 **********
2026-03-13 01:03:46.111327 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:03:46.111333 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:03:46.111340 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:03:46.111347 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:03:46.111353 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:03:46.111360 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:03:46.111367 | orchestrator |
2026-03-13 01:03:46.111374 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2026-03-13 01:03:46.111380 | orchestrator | Friday 13 March 2026 01:01:33 +0000 (0:00:02.369) 0:01:40.996 **********
2026-03-13 01:03:46.111387 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:03:46.111394 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:03:46.111400 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:03:46.111407 | orchestrator | changed: [testbed-node-4]
2026-03-13 01:03:46.111413 | orchestrator | changed: [testbed-node-3]
2026-03-13 01:03:46.111419 | orchestrator | changed: [testbed-node-5]
2026-03-13 01:03:46.111424 | orchestrator |
2026-03-13 01:03:46.111430 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-03-13 01:03:46.111436 | orchestrator | Friday 13 March 2026 01:01:36 +0000 (0:00:03.092) 0:01:44.088 **********
2026-03-13 01:03:46.111442 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:03:46.111447 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:03:46.111452 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:03:46.111459 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:03:46.111466 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:03:46.111472 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:03:46.111479 | orchestrator |
2026-03-13 01:03:46.111486 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-03-13 01:03:46.111493 | orchestrator | Friday 13 March 2026 01:01:38 +0000 (0:00:02.556) 0:01:46.645 **********
2026-03-13 01:03:46.111499 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:03:46.111505 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:03:46.111511 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:03:46.111518 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:03:46.111525 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:03:46.111531 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:03:46.111538 | orchestrator |
2026-03-13 01:03:46.111544 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-03-13 01:03:46.111550 | orchestrator | Friday 13 March 2026 01:01:40 +0000 (0:00:01.735) 0:01:48.380 **********
2026-03-13 01:03:46.111556 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:03:46.111562 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:03:46.111569 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:03:46.111582 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:03:46.111586 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:03:46.111590 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:03:46.111594 | orchestrator |
2026-03-13 01:03:46.111598 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-03-13 01:03:46.111602 | orchestrator | Friday 13 March 2026 01:01:42 +0000 (0:00:02.431) 0:01:50.812 **********
2026-03-13 01:03:46.111606 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:03:46.111613 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:03:46.111619 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:03:46.111625 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:03:46.111631 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:03:46.111637 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:03:46.111644 | orchestrator |
2026-03-13 01:03:46.111651 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-03-13 01:03:46.111657 | orchestrator | Friday 13 March 2026 01:01:45 +0000 (0:00:02.789) 0:01:53.601 **********
2026-03-13 01:03:46.111664 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:03:46.111670 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:03:46.111677 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:03:46.111682 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:03:46.111686 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:03:46.111690 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:03:46.111695 | orchestrator |
2026-03-13 01:03:46.111702 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-03-13 01:03:46.111708 | orchestrator | Friday 13 March 2026 01:01:47 +0000 (0:00:01.776) 0:01:55.378 **********
2026-03-13 01:03:46.111714 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:03:46.111720 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:03:46.111727 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:03:46.111733 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:03:46.111739 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:03:46.111745 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:03:46.111751 | orchestrator |
2026-03-13 01:03:46.111757 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-03-13 01:03:46.111763 | orchestrator | Friday 13 March 2026 01:01:49 +0000 (0:00:01.646) 0:01:57.024 **********
2026-03-13 01:03:46.111769 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:03:46.111776 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:03:46.111788 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:03:46.111795 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:03:46.111801 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:03:46.111808 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:03:46.111814 | orchestrator |
2026-03-13 01:03:46.111820 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-03-13 01:03:46.111827 | orchestrator | Friday 13 March 2026 01:01:50 +0000 (0:00:01.736) 0:01:58.761 **********
2026-03-13 01:03:46.111837 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-13 01:03:46.111846 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:03:46.111852 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-13 01:03:46.111859 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:03:46.111865 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-13 01:03:46.111871 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:03:46.111877 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-13 01:03:46.111884 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:03:46.111890 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-13 01:03:46.111896 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:03:46.111907 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-13 01:03:46.111913 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:03:46.111920 | orchestrator |
2026-03-13 01:03:46.111925 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-03-13 01:03:46.111931 | orchestrator | Friday 13 March 2026 01:01:52 +0000 (0:00:01.903) 0:02:00.664 **********
2026-03-13 01:03:46.111939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-13 01:03:46.111946 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:03:46.111952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-13 01:03:46.111960 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:03:46.111967 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-13 01:03:46.111974 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:03:46.112011 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-13 01:03:46.112021 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:03:46.112029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-13 01:03:46.112033 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:03:46.112037 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-13 01:03:46.112043 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:03:46.112050 | orchestrator |
2026-03-13 01:03:46.112056 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2026-03-13 01:03:46.112062 | orchestrator | Friday 13 March 2026 01:01:54 +0000 (0:00:01.930) 0:02:02.595 **********
2026-03-13 01:03:46.112069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-13 01:03:46.112080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-13 01:03:46.112090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-13 01:03:46.112102 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-13 01:03:46.112109 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-13 01:03:46.112116 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-13 01:03:46.112123 | orchestrator |
2026-03-13 01:03:46.112129 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-13 01:03:46.112136 | orchestrator | Friday 13 March 2026 01:01:58 +0000 (0:00:03.527) 0:02:06.123 **********
2026-03-13 01:03:46.112143 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:03:46.112149 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:03:46.112156 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:03:46.112162 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:03:46.112168 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:03:46.112174 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:03:46.112180 | orchestrator |
2026-03-13 01:03:46.112187 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-03-13 01:03:46.112193 | orchestrator | Friday 13 March 2026 01:01:58 +0000 (0:00:00.524) 0:02:06.647 **********
2026-03-13 01:03:46.112199 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:03:46.112205 | orchestrator |
2026-03-13 01:03:46.112215 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-03-13 01:03:46.112226 | orchestrator | Friday 13 March 2026 01:02:01 +0000 (0:00:02.592) 0:02:09.240 **********
2026-03-13 01:03:46.112233 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:03:46.112239 | orchestrator |
2026-03-13 01:03:46.112245 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-03-13 01:03:46.112252 | orchestrator | Friday 13 March 2026 01:02:03 +0000 (0:00:02.110) 0:02:11.350 **********
2026-03-13 01:03:46.112258 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:03:46.112265 | orchestrator |
2026-03-13 01:03:46.112271 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-13 01:03:46.112281 | orchestrator | Friday 13 March 2026 01:02:40 +0000 (0:00:37.272) 0:02:48.622 **********
2026-03-13 01:03:46.112287 | orchestrator |
2026-03-13 01:03:46.112293 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-13 01:03:46.112300 | orchestrator | Friday 13 March 2026 01:02:40 +0000 (0:00:00.062) 0:02:48.684 **********
2026-03-13 01:03:46.112306 | orchestrator |
2026-03-13 01:03:46.112312 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-13 01:03:46.112318 | orchestrator | Friday 13 March 2026 01:02:40 +0000 (0:00:00.219) 0:02:48.904 **********
2026-03-13 01:03:46.112324 | orchestrator |
2026-03-13 01:03:46.112330 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-13 01:03:46.112336 | orchestrator | Friday 13 March 2026 01:02:41 +0000 (0:00:00.059) 0:02:48.964 **********
2026-03-13 01:03:46.112342 | orchestrator |
2026-03-13 01:03:46.112348 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-13 01:03:46.112354 | orchestrator | Friday 13 March 2026 01:02:41 +0000 (0:00:00.059) 0:02:49.024 **********
2026-03-13 01:03:46.112360 | orchestrator |
2026-03-13 01:03:46.112366 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-13 01:03:46.112373 | orchestrator | Friday 13 March 2026 01:02:41 +0000 (0:00:00.059) 0:02:49.084 **********
2026-03-13 01:03:46.112379 | orchestrator |
2026-03-13 01:03:46.112385 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-03-13 01:03:46.112392 | orchestrator | Friday 13 March 2026 01:02:41 +0000 (0:00:00.061) 0:02:49.145 **********
2026-03-13 01:03:46.112398 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:03:46.112404 | orchestrator | changed: [testbed-node-1]
2026-03-13 01:03:46.112411 | orchestrator | changed: [testbed-node-2]
2026-03-13 01:03:46.112417 | orchestrator |
2026-03-13 01:03:46.112424 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-03-13 01:03:46.112430 | orchestrator | Friday 13 March 2026 01:02:59 +0000 (0:00:18.598) 0:03:07.744 **********
2026-03-13 01:03:46.112437 | orchestrator | changed: [testbed-node-5]
2026-03-13 01:03:46.112443 | orchestrator | changed: [testbed-node-4]
2026-03-13 01:03:46.112449 | orchestrator | changed: [testbed-node-3]
2026-03-13 01:03:46.112456 | orchestrator |
2026-03-13 01:03:46.112462 | orchestrator | PLAY
RECAP ********************************************************************* 2026-03-13 01:03:46.112469 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-13 01:03:46.112476 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-13 01:03:46.112483 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-13 01:03:46.112489 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-13 01:03:46.112495 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-13 01:03:46.112507 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-13 01:03:46.112514 | orchestrator | 2026-03-13 01:03:46.112520 | orchestrator | 2026-03-13 01:03:46.112526 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 01:03:46.112533 | orchestrator | Friday 13 March 2026 01:03:44 +0000 (0:00:44.302) 0:03:52.046 ********** 2026-03-13 01:03:46.112539 | orchestrator | =============================================================================== 2026-03-13 01:03:46.112545 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 44.30s 2026-03-13 01:03:46.112551 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 37.27s 2026-03-13 01:03:46.112558 | orchestrator | neutron : Restart neutron-server container ----------------------------- 18.60s 2026-03-13 01:03:46.112564 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 6.98s 2026-03-13 01:03:46.112570 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.41s 2026-03-13 01:03:46.112576 | orchestrator | 
neutron : Copying over neutron.conf ------------------------------------- 6.25s 2026-03-13 01:03:46.112582 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.03s 2026-03-13 01:03:46.112589 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.82s 2026-03-13 01:03:46.112595 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.68s 2026-03-13 01:03:46.112601 | orchestrator | Setting sysctl values --------------------------------------------------- 3.61s 2026-03-13 01:03:46.112608 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.53s 2026-03-13 01:03:46.112614 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.50s 2026-03-13 01:03:46.112625 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.33s 2026-03-13 01:03:46.112631 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.32s 2026-03-13 01:03:46.112638 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.27s 2026-03-13 01:03:46.112644 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.11s 2026-03-13 01:03:46.112651 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.09s 2026-03-13 01:03:46.112660 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.09s 2026-03-13 01:03:46.112667 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 2.91s 2026-03-13 01:03:46.112673 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 2.79s 2026-03-13 01:03:46.112679 | orchestrator | 2026-03-13 01:03:46 | INFO  | Task bcd8c5af-ddf8-4abd-abe0-4cc27c79d7d7 is in state STARTED 2026-03-13 01:03:46.112686 | orchestrator 
| 2026-03-13 01:03:46 | INFO  | Task a78b234f-5c53-47f3-83b5-30fcc131e144 is in state STARTED
2026-03-13 01:03:46.112692 | orchestrator | 2026-03-13 01:03:46 | INFO  | Task 7b6a4f16-d3f8-4f5c-8d27-c9f6b0982ad9 is in state STARTED
2026-03-13 01:03:46.112698 | orchestrator | 2026-03-13 01:03:46 | INFO  | Wait 1 second(s) until the next check
2026-03-13 01:03:49.141874 | orchestrator | 2026-03-13 01:03:49 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED
2026-03-13 01:03:49.142010 | orchestrator | 2026-03-13 01:03:49 | INFO  | Task bcd8c5af-ddf8-4abd-abe0-4cc27c79d7d7 is in state STARTED
2026-03-13 01:03:49.143202 | orchestrator | 2026-03-13 01:03:49 | INFO  | Task a78b234f-5c53-47f3-83b5-30fcc131e144 is in state STARTED
2026-03-13 01:03:49.143231 | orchestrator | 2026-03-13 01:03:49 | INFO  | Task 7b6a4f16-d3f8-4f5c-8d27-c9f6b0982ad9 is in state STARTED
2026-03-13 01:03:49.143236 | orchestrator | 2026-03-13 01:03:49 | INFO  | Wait 1 second(s) until the next check
2026-03-13 01:03:52.169801 | orchestrator | 2026-03-13 01:03:52 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED
2026-03-13 01:03:52.172287 | orchestrator | 2026-03-13 01:03:52 | INFO  | Task bcd8c5af-ddf8-4abd-abe0-4cc27c79d7d7 is in state STARTED
2026-03-13 01:03:52.173876 | orchestrator | 2026-03-13 01:03:52 | INFO  | Task a78b234f-5c53-47f3-83b5-30fcc131e144 is in state SUCCESS
2026-03-13 01:03:52.175418 | orchestrator | 2026-03-13 01:03:52 | INFO  | Task 7b6a4f16-d3f8-4f5c-8d27-c9f6b0982ad9 is in state STARTED
2026-03-13 01:03:52.177648 | orchestrator | 2026-03-13 01:03:52 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state STARTED
2026-03-13 01:03:52.177710 | orchestrator | 2026-03-13 01:03:52 | INFO  | Wait 1 second(s) until the next check
2026-03-13 01:03:55.217803 | orchestrator | 2026-03-13 01:03:55 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED
2026-03-13 01:03:55.219198 | orchestrator | 2026-03-13 01:03:55 | INFO  | Task bcd8c5af-ddf8-4abd-abe0-4cc27c79d7d7 is in state STARTED
2026-03-13 01:03:55.221671 | orchestrator | 2026-03-13 01:03:55 | INFO  | Task 7b6a4f16-d3f8-4f5c-8d27-c9f6b0982ad9 is in state STARTED
2026-03-13 01:03:55.223679 | orchestrator | 2026-03-13 01:03:55 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state STARTED
2026-03-13 01:03:55.224131 | orchestrator | 2026-03-13 01:03:55 | INFO  | Wait 1 second(s) until the next check
2026-03-13 01:03:58.265788 | orchestrator | 2026-03-13 01:03:58 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED
2026-03-13 01:03:58.267183 | orchestrator | 2026-03-13 01:03:58 | INFO  | Task bcd8c5af-ddf8-4abd-abe0-4cc27c79d7d7 is in state STARTED
2026-03-13 01:03:58.269465 | orchestrator | 2026-03-13 01:03:58 | INFO  | Task 7b6a4f16-d3f8-4f5c-8d27-c9f6b0982ad9 is in state STARTED
2026-03-13 01:03:58.271717 | orchestrator | 2026-03-13 01:03:58 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state STARTED
2026-03-13 01:03:58.271769 | orchestrator | 2026-03-13 01:03:58 | INFO  | Wait 1 second(s) until the next check
2026-03-13 01:04:01.308925 | orchestrator | 2026-03-13 01:04:01 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED
2026-03-13 01:04:01.309040 | orchestrator | 2026-03-13 01:04:01 | INFO  | Task bcd8c5af-ddf8-4abd-abe0-4cc27c79d7d7 is in state STARTED
2026-03-13 01:04:01.309923 | orchestrator | 2026-03-13 01:04:01 | INFO  | Task 7b6a4f16-d3f8-4f5c-8d27-c9f6b0982ad9 is in state STARTED
2026-03-13 01:04:01.311069 | orchestrator | 2026-03-13 01:04:01 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state STARTED
2026-03-13 01:04:01.311111 | orchestrator | 2026-03-13 01:04:01 | INFO  | Wait 1 second(s) until the next check
2026-03-13 01:04:04.350793 | orchestrator | 2026-03-13 01:04:04 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED
2026-03-13 01:04:04.352887 | orchestrator | 2026-03-13 01:04:04 | INFO  | Task bcd8c5af-ddf8-4abd-abe0-4cc27c79d7d7 is in state STARTED
2026-03-13 01:04:04.355219 | orchestrator | 2026-03-13 01:04:04 | INFO  | Task 7b6a4f16-d3f8-4f5c-8d27-c9f6b0982ad9 is in state STARTED
2026-03-13 01:04:04.359033 | orchestrator | 2026-03-13 01:04:04 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state STARTED
2026-03-13 01:04:04.359089 | orchestrator | 2026-03-13 01:04:04 | INFO  | Wait 1 second(s) until the next check
2026-03-13 01:04:07.389568 | orchestrator | 2026-03-13 01:04:07 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED
2026-03-13 01:04:07.389640 | orchestrator | 2026-03-13 01:04:07 | INFO  | Task bcd8c5af-ddf8-4abd-abe0-4cc27c79d7d7 is in state STARTED
2026-03-13 01:04:07.390958 | orchestrator | 2026-03-13 01:04:07 | INFO  | Task 7b6a4f16-d3f8-4f5c-8d27-c9f6b0982ad9 is in state STARTED
2026-03-13 01:04:07.392594 | orchestrator | 2026-03-13 01:04:07 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state STARTED
2026-03-13 01:04:07.392653 | orchestrator | 2026-03-13 01:04:07 | INFO  | Wait 1 second(s) until the next check
2026-03-13 01:04:10.445550 | orchestrator | 2026-03-13 01:04:10 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED
2026-03-13 01:04:10.449099 | orchestrator | 2026-03-13 01:04:10 | INFO  | Task bcd8c5af-ddf8-4abd-abe0-4cc27c79d7d7 is in state STARTED
2026-03-13 01:04:10.452298 | orchestrator | 2026-03-13 01:04:10 | INFO  | Task 7b6a4f16-d3f8-4f5c-8d27-c9f6b0982ad9 is in state STARTED
2026-03-13 01:04:10.454003 | orchestrator | 2026-03-13 01:04:10 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state STARTED
2026-03-13 01:04:10.454083 | orchestrator | 2026-03-13 01:04:10 | INFO  | Wait 1 second(s) until the next check
2026-03-13 01:04:13.513771 | orchestrator | 2026-03-13 01:04:13 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED
2026-03-13 01:04:13.522920 | orchestrator | 2026-03-13 01:04:13 | INFO  | Task bcd8c5af-ddf8-4abd-abe0-4cc27c79d7d7 is in state STARTED
2026-03-13 01:04:13.527506 | orchestrator | 2026-03-13 01:04:13 | INFO  | Task 7b6a4f16-d3f8-4f5c-8d27-c9f6b0982ad9 is in state STARTED
2026-03-13 01:04:13.529662 | orchestrator | 2026-03-13 01:04:13 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state STARTED
2026-03-13 01:04:13.529795 | orchestrator | 2026-03-13 01:04:13 | INFO  | Wait 1 second(s) until the next check
2026-03-13 01:04:16.565557 | orchestrator | 2026-03-13 01:04:16 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED
2026-03-13 01:04:16.566505 | orchestrator | 2026-03-13 01:04:16 | INFO  | Task bcd8c5af-ddf8-4abd-abe0-4cc27c79d7d7 is in state STARTED
2026-03-13 01:04:16.567563 | orchestrator | 2026-03-13 01:04:16 | INFO  | Task 7b6a4f16-d3f8-4f5c-8d27-c9f6b0982ad9 is in state STARTED
2026-03-13 01:04:16.568811 | orchestrator | 2026-03-13 01:04:16 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state STARTED
2026-03-13 01:04:16.568848 | orchestrator | 2026-03-13 01:04:16 | INFO  | Wait 1 second(s) until the next check
2026-03-13 01:04:19.601291 | orchestrator | 2026-03-13 01:04:19 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED
2026-03-13 01:04:19.601750 | orchestrator | 2026-03-13 01:04:19 | INFO  | Task bcd8c5af-ddf8-4abd-abe0-4cc27c79d7d7 is in state STARTED
2026-03-13 01:04:19.603822 | orchestrator | 2026-03-13 01:04:19 | INFO  | Task 7b6a4f16-d3f8-4f5c-8d27-c9f6b0982ad9 is in state STARTED
2026-03-13 01:04:19.604293 | orchestrator | 2026-03-13 01:04:19 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state STARTED
2026-03-13 01:04:19.604319 | orchestrator | 2026-03-13 01:04:19 | INFO  | Wait 1 second(s) until the next check
2026-03-13 01:04:22.629191 | orchestrator | 2026-03-13 01:04:22 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED
2026-03-13 01:04:22.629546 | orchestrator | 2026-03-13 01:04:22 | INFO  | Task bcd8c5af-ddf8-4abd-abe0-4cc27c79d7d7 is in state STARTED
2026-03-13 01:04:22.630240 | orchestrator | 2026-03-13 01:04:22 | INFO  | Task 7b6a4f16-d3f8-4f5c-8d27-c9f6b0982ad9 is in state STARTED
2026-03-13 01:04:22.630832 | orchestrator | 2026-03-13 01:04:22 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state STARTED
2026-03-13 01:04:22.630860 | orchestrator | 2026-03-13 01:04:22 | INFO  | Wait 1 second(s) until the next check
2026-03-13 01:04:25.656853 | orchestrator | 2026-03-13 01:04:25 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED
2026-03-13 01:04:25.657197 | orchestrator | 2026-03-13 01:04:25 | INFO  | Task bcd8c5af-ddf8-4abd-abe0-4cc27c79d7d7 is in state STARTED
2026-03-13 01:04:25.657804 | orchestrator | 2026-03-13 01:04:25 | INFO  | Task 7b6a4f16-d3f8-4f5c-8d27-c9f6b0982ad9 is in state STARTED
2026-03-13 01:04:25.658414 | orchestrator | 2026-03-13 01:04:25 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state STARTED
2026-03-13 01:04:25.658437 | orchestrator | 2026-03-13 01:04:25 | INFO  | Wait 1 second(s) until the next check
2026-03-13 01:04:28.698614 | orchestrator | 2026-03-13 01:04:28 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state STARTED
2026-03-13 01:04:28.698673 | orchestrator | 2026-03-13 01:04:28 | INFO  | Task bcd8c5af-ddf8-4abd-abe0-4cc27c79d7d7 is in state STARTED
2026-03-13 01:04:28.698682 | orchestrator | 2026-03-13 01:04:28 | INFO  | Task 7b6a4f16-d3f8-4f5c-8d27-c9f6b0982ad9 is in state STARTED
2026-03-13 01:04:28.698689 | orchestrator | 2026-03-13 01:04:28 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state STARTED
2026-03-13 01:04:28.698695 | orchestrator | 2026-03-13 01:04:28 | INFO  | Wait 1 second(s) until the next check
2026-03-13 01:04:31.722895 | orchestrator | 2026-03-13 01:04:31 | INFO  | Task d1b44c6f-8cd1-4de8-81d2-12b995c7c7a0 is in state SUCCESS
2026-03-13 01:04:31.723847 | orchestrator |
2026-03-13 01:04:31.723953 | orchestrator |
2026-03-13
01:04:31.723988 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-13 01:04:31.723994 | orchestrator |
2026-03-13 01:04:31.723998 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-13 01:04:31.724001 | orchestrator | Friday 13 March 2026 01:03:18 +0000 (0:00:00.749) 0:00:00.749 **********
2026-03-13 01:04:31.724005 | orchestrator | ok: [testbed-manager]
2026-03-13 01:04:31.724009 | orchestrator | ok: [testbed-node-0]
2026-03-13 01:04:31.724012 | orchestrator | ok: [testbed-node-1]
2026-03-13 01:04:31.724015 | orchestrator | ok: [testbed-node-2]
2026-03-13 01:04:31.724019 | orchestrator | ok: [testbed-node-3]
2026-03-13 01:04:31.724022 | orchestrator | ok: [testbed-node-4]
2026-03-13 01:04:31.724025 | orchestrator | ok: [testbed-node-5]
2026-03-13 01:04:31.724045 | orchestrator |
2026-03-13 01:04:31.724049 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-13 01:04:31.724052 | orchestrator | Friday 13 March 2026 01:03:19 +0000 (0:00:00.988) 0:00:01.738 **********
2026-03-13 01:04:31.724056 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-03-13 01:04:31.724060 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-03-13 01:04:31.724063 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-03-13 01:04:31.724066 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-03-13 01:04:31.724069 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-03-13 01:04:31.724073 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-03-13 01:04:31.724076 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-03-13 01:04:31.724079 | orchestrator |
2026-03-13 01:04:31.724082 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-03-13 01:04:31.724085 | orchestrator |
2026-03-13 01:04:31.724088 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-03-13 01:04:31.724092 | orchestrator | Friday 13 March 2026 01:03:20 +0000 (0:00:00.793) 0:00:02.531 **********
2026-03-13 01:04:31.724095 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-13 01:04:31.724100 | orchestrator |
2026-03-13 01:04:31.724103 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-03-13 01:04:31.724106 | orchestrator | Friday 13 March 2026 01:03:22 +0000 (0:00:01.478) 0:00:04.010 **********
2026-03-13 01:04:31.724201 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-03-13 01:04:31.724227 | orchestrator |
2026-03-13 01:04:31.724233 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-03-13 01:04:31.724238 | orchestrator | Friday 13 March 2026 01:03:25 +0000 (0:00:03.041) 0:00:07.052 **********
2026-03-13 01:04:31.724243 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-03-13 01:04:31.724250 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-03-13 01:04:31.724256 | orchestrator |
2026-03-13 01:04:31.724262 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-03-13 01:04:31.724265 | orchestrator | Friday 13 March 2026 01:03:31 +0000 (0:00:06.067) 0:00:13.119 **********
2026-03-13 01:04:31.724268 | orchestrator | ok: [testbed-manager] => (item=service)
2026-03-13 01:04:31.724271 | orchestrator |
2026-03-13 01:04:31.724275 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-03-13 01:04:31.724278 | orchestrator | Friday 13 March 2026 01:03:33 +0000 (0:00:02.675) 0:00:15.795 **********
2026-03-13 01:04:31.724281 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-03-13 01:04:31.724284 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-13 01:04:31.724287 | orchestrator |
2026-03-13 01:04:31.724290 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-03-13 01:04:31.724293 | orchestrator | Friday 13 March 2026 01:03:38 +0000 (0:00:04.561) 0:00:20.357 **********
2026-03-13 01:04:31.724296 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-03-13 01:04:31.724299 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-03-13 01:04:31.724302 | orchestrator |
2026-03-13 01:04:31.724305 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-03-13 01:04:31.724308 | orchestrator | Friday 13 March 2026 01:03:44 +0000 (0:00:06.493) 0:00:26.850 **********
2026-03-13 01:04:31.724311 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-03-13 01:04:31.724315 | orchestrator |
2026-03-13 01:04:31.724318 | orchestrator | PLAY RECAP *********************************************************************
2026-03-13 01:04:31.724327 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 01:04:31.724331 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 01:04:31.724334 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 01:04:31.724337 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 01:04:31.724340 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 01:04:31.724350 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 01:04:31.724353 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 01:04:31.724356 | orchestrator |
2026-03-13 01:04:31.724359 | orchestrator |
2026-03-13 01:04:31.724362 | orchestrator | TASKS RECAP ********************************************************************
2026-03-13 01:04:31.724365 | orchestrator | Friday 13 March 2026 01:03:49 +0000 (0:00:04.965) 0:00:31.816 **********
2026-03-13 01:04:31.724368 | orchestrator | ===============================================================================
2026-03-13 01:04:31.724371 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.49s
2026-03-13 01:04:31.724374 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.07s
2026-03-13 01:04:31.724381 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.96s
2026-03-13 01:04:31.724384 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.56s
2026-03-13 01:04:31.724387 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.04s
2026-03-13 01:04:31.724390 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.68s
2026-03-13 01:04:31.724393 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.48s
2026-03-13 01:04:31.724396 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.99s
2026-03-13 01:04:31.724399 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.79s
2026-03-13 01:04:31.724402 | orchestrator |
2026-03-13 01:04:31.724405 | orchestrator |
2026-03-13 01:04:31.724408 | orchestrator
| PLAY [Group hosts based on configuration] **************************************
2026-03-13 01:04:31.724411 | orchestrator |
2026-03-13 01:04:31.724414 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-13 01:04:31.724417 | orchestrator | Friday 13 March 2026 01:02:43 +0000 (0:00:00.238) 0:00:00.238 **********
2026-03-13 01:04:31.724420 | orchestrator | ok: [testbed-node-0]
2026-03-13 01:04:31.724423 | orchestrator | ok: [testbed-node-1]
2026-03-13 01:04:31.724426 | orchestrator | ok: [testbed-node-2]
2026-03-13 01:04:31.724429 | orchestrator |
2026-03-13 01:04:31.724433 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-13 01:04:31.724436 | orchestrator | Friday 13 March 2026 01:02:44 +0000 (0:00:00.318) 0:00:00.556 **********
2026-03-13 01:04:31.724439 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-03-13 01:04:31.724442 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-03-13 01:04:31.724445 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-03-13 01:04:31.724448 | orchestrator |
2026-03-13 01:04:31.724451 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-03-13 01:04:31.724454 | orchestrator |
2026-03-13 01:04:31.724457 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-03-13 01:04:31.724460 | orchestrator | Friday 13 March 2026 01:02:44 +0000 (0:00:00.394) 0:00:00.950 **********
2026-03-13 01:04:31.724463 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 01:04:31.724466 | orchestrator |
2026-03-13 01:04:31.724470 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-03-13 01:04:31.724473 | orchestrator | Friday 13 March 2026 01:02:44 +0000 (0:00:00.436) 0:00:01.387 **********
2026-03-13 01:04:31.724476 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-03-13 01:04:31.724479 | orchestrator |
2026-03-13 01:04:31.724482 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-03-13 01:04:31.724485 | orchestrator | Friday 13 March 2026 01:02:47 +0000 (0:00:02.733) 0:00:04.121 **********
2026-03-13 01:04:31.724488 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-03-13 01:04:31.724491 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-03-13 01:04:31.724494 | orchestrator |
2026-03-13 01:04:31.724497 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-03-13 01:04:31.724500 | orchestrator | Friday 13 March 2026 01:02:53 +0000 (0:00:05.926) 0:00:10.048 **********
2026-03-13 01:04:31.724503 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-13 01:04:31.724506 | orchestrator |
2026-03-13 01:04:31.724510 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-03-13 01:04:31.724515 | orchestrator | Friday 13 March 2026 01:02:56 +0000 (0:00:02.989) 0:00:13.037 **********
2026-03-13 01:04:31.724520 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-03-13 01:04:31.724527 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-13 01:04:31.724532 | orchestrator |
2026-03-13 01:04:31.724541 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-03-13 01:04:31.724546 | orchestrator | Friday 13 March 2026 01:03:00 +0000 (0:00:03.809) 0:00:16.847 **********
2026-03-13 01:04:31.724551 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-13 01:04:31.724557 | orchestrator |
2026-03-13 01:04:31.724562 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2026-03-13 01:04:31.724568 | orchestrator | Friday 13 March 2026 01:03:03 +0000 (0:00:03.224) 0:00:20.072 **********
2026-03-13 01:04:31.724571 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-03-13 01:04:31.724574 | orchestrator |
2026-03-13 01:04:31.724577 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-03-13 01:04:31.724580 | orchestrator | Friday 13 March 2026 01:03:07 +0000 (0:00:03.913) 0:00:23.985 **********
2026-03-13 01:04:31.724583 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:04:31.724587 | orchestrator |
2026-03-13 01:04:31.724590 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-03-13 01:04:31.724597 | orchestrator | Friday 13 March 2026 01:03:11 +0000 (0:00:03.987) 0:00:27.973 **********
2026-03-13 01:04:31.724600 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:04:31.724603 | orchestrator |
2026-03-13 01:04:31.724606 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-03-13 01:04:31.724610 | orchestrator | Friday 13 March 2026 01:03:14 +0000 (0:00:03.414) 0:00:31.388 **********
2026-03-13 01:04:31.724615 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:04:31.724621 | orchestrator |
2026-03-13 01:04:31.724628 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2026-03-13 01:04:31.724633 | orchestrator | Friday 13 March 2026 01:03:18 +0000 (0:00:03.136) 0:00:34.524 **********
2026-03-13 01:04:31.724640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes':
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-13 01:04:31.724647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-13 01:04:31.724713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-13 01:04:31.724724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-13 01:04:31.724733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor
5672'], 'timeout': '30'}}}) 2026-03-13 01:04:31.724737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-13 01:04:31.724740 | orchestrator | 2026-03-13 01:04:31.724743 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-03-13 01:04:31.724746 | orchestrator | Friday 13 March 2026 01:03:19 +0000 (0:00:01.821) 0:00:36.346 ********** 2026-03-13 01:04:31.724750 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:04:31.724753 | orchestrator | 2026-03-13 01:04:31.724756 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-03-13 01:04:31.724759 | orchestrator | Friday 13 March 2026 01:03:20 +0000 (0:00:00.115) 0:00:36.461 ********** 2026-03-13 01:04:31.724762 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:04:31.724765 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:04:31.724768 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:04:31.724771 | orchestrator | 2026-03-13 01:04:31.724774 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-03-13 01:04:31.724777 | orchestrator | Friday 13 March 2026 01:03:20 +0000 (0:00:00.509) 0:00:36.971 ********** 2026-03-13 01:04:31.724780 | orchestrator | ok: [testbed-node-0 
-> localhost] 2026-03-13 01:04:31.724783 | orchestrator | 2026-03-13 01:04:31.724786 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-03-13 01:04:31.724789 | orchestrator | Friday 13 March 2026 01:03:21 +0000 (0:00:00.894) 0:00:37.865 ********** 2026-03-13 01:04:31.724795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-13 01:04:31.724800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': 
'9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-13 01:04:31.724806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-13 01:04:31.724810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-13 01:04:31.724813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-13 01:04:31.724819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-13 01:04:31.724822 | orchestrator | 2026-03-13 01:04:31.724826 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-03-13 01:04:31.724829 | orchestrator | Friday 13 March 2026 01:03:23 +0000 (0:00:02.427) 0:00:40.292 ********** 2026-03-13 01:04:31.724832 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:04:31.724835 | orchestrator | ok: [testbed-node-1] 2026-03-13 01:04:31.724838 | orchestrator | ok: [testbed-node-2] 2026-03-13 01:04:31.724841 | orchestrator | 2026-03-13 01:04:31.724844 | orchestrator | TASK [magnum : 
include_tasks] ************************************************** 2026-03-13 01:04:31.724847 | orchestrator | Friday 13 March 2026 01:03:24 +0000 (0:00:00.325) 0:00:40.618 ********** 2026-03-13 01:04:31.724855 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 01:04:31.724859 | orchestrator | 2026-03-13 01:04:31.724862 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-03-13 01:04:31.724865 | orchestrator | Friday 13 March 2026 01:03:24 +0000 (0:00:00.671) 0:00:41.289 ********** 2026-03-13 01:04:31.724871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-13 01:04:31.724875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-13 01:04:31.724878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-13 01:04:31.724884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-13 01:04:31.724889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-13 01:04:31.724896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-13 01:04:31.724900 | orchestrator | 2026-03-13 01:04:31.724913 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-03-13 01:04:31.724919 | orchestrator | Friday 13 March 2026 01:03:27 +0000 
(0:00:02.710) 0:00:44.000 ********** 2026-03-13 01:04:31.724923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-13 01:04:31.724931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-13 01:04:31.724934 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:04:31.724937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-13 01:04:31.724942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-13 01:04:31.724946 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:04:31.724952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-13 01:04:31.724955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-13 01:04:31.724962 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:04:31.724965 | orchestrator | 2026-03-13 01:04:31.724968 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-03-13 01:04:31.724971 | orchestrator | Friday 13 March 2026 01:03:28 +0000 (0:00:01.107) 0:00:45.107 ********** 2026-03-13 01:04:31.724974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-13 01:04:31.724978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-13 01:04:31.724981 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:04:31.724988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-13 01:04:31.724991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-13 01:04:31.724997 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:04:31.725004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-13 01:04:31.725012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-13 01:04:31.725017 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:04:31.725022 | orchestrator | 2026-03-13 01:04:31.725026 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-03-13 01:04:31.725031 | orchestrator | Friday 13 March 2026 01:03:30 +0000 (0:00:01.441) 0:00:46.549 ********** 2026-03-13 01:04:31.725039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-13 01:04:31.725049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-13 01:04:31.725054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-13 01:04:31.725063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-13 01:04:31.725068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-13 01:04:31.725075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-13 01:04:31.725080 | orchestrator | 2026-03-13 01:04:31.725085 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-03-13 01:04:31.725090 | orchestrator | Friday 13 March 2026 01:03:32 +0000 (0:00:02.408) 0:00:48.958 ********** 2026-03-13 01:04:31.725098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-13 01:04:31.725106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-13 01:04:31.725112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-13 01:04:31.725117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-13 01:04:31.725124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-13 01:04:31.725132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-13 01:04:31.725140 | orchestrator | 2026-03-13 01:04:31.725145 | orchestrator | TASK 
[magnum : Copying over existing policy file] ****************************** 2026-03-13 01:04:31.725150 | orchestrator | Friday 13 March 2026 01:03:39 +0000 (0:00:06.516) 0:00:55.474 ********** 2026-03-13 01:04:31.725155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-13 01:04:31.725161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-13 01:04:31.725166 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:04:31.725171 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-13 01:04:31.725178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-13 01:04:31.725183 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:04:31.725191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-13 01:04:31.725201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-13 01:04:31.725206 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:04:31.725211 | orchestrator | 2026-03-13 01:04:31.725216 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-03-13 01:04:31.725220 | orchestrator | Friday 13 March 2026 01:03:41 +0000 (0:00:02.055) 0:00:57.530 ********** 2026-03-13 01:04:31.725225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-13 01:04:31.725232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-13 01:04:31.725238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 
'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-13 01:04:31.725248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-13 01:04:31.725253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-13 01:04:31.725258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-13 01:04:31.725264 | orchestrator | 2026-03-13 01:04:31.725269 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-13 01:04:31.725273 | orchestrator | Friday 13 March 2026 01:03:44 +0000 (0:00:03.347) 0:01:00.877 ********** 2026-03-13 01:04:31.725278 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:04:31.725283 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:04:31.725288 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:04:31.725293 | orchestrator | 2026-03-13 01:04:31.725298 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-03-13 01:04:31.725303 | orchestrator | Friday 13 March 2026 01:03:45 +0000 (0:00:00.686) 0:01:01.564 ********** 2026-03-13 01:04:31.725307 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:04:31.725312 | orchestrator | 2026-03-13 01:04:31.725317 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-03-13 01:04:31.725322 | orchestrator | 
Friday 13 March 2026 01:03:47 +0000 (0:00:02.819) 0:01:04.383 ********** 2026-03-13 01:04:31.725327 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:04:31.725331 | orchestrator | 2026-03-13 01:04:31.725336 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-03-13 01:04:31.725344 | orchestrator | Friday 13 March 2026 01:03:50 +0000 (0:00:02.313) 0:01:06.696 ********** 2026-03-13 01:04:31.725350 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:04:31.725355 | orchestrator | 2026-03-13 01:04:31.725362 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-13 01:04:31.725368 | orchestrator | Friday 13 March 2026 01:04:04 +0000 (0:00:14.439) 0:01:21.136 ********** 2026-03-13 01:04:31.725374 | orchestrator | 2026-03-13 01:04:31.725380 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-13 01:04:31.725386 | orchestrator | Friday 13 March 2026 01:04:04 +0000 (0:00:00.066) 0:01:21.202 ********** 2026-03-13 01:04:31.725391 | orchestrator | 2026-03-13 01:04:31.725396 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-13 01:04:31.725401 | orchestrator | Friday 13 March 2026 01:04:04 +0000 (0:00:00.060) 0:01:21.263 ********** 2026-03-13 01:04:31.725407 | orchestrator | 2026-03-13 01:04:31.725412 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-03-13 01:04:31.725415 | orchestrator | Friday 13 March 2026 01:04:04 +0000 (0:00:00.068) 0:01:21.331 ********** 2026-03-13 01:04:31.725419 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:04:31.725423 | orchestrator | changed: [testbed-node-1] 2026-03-13 01:04:31.725426 | orchestrator | changed: [testbed-node-2] 2026-03-13 01:04:31.725430 | orchestrator | 2026-03-13 01:04:31.725434 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor 
container] ****************** 2026-03-13 01:04:31.725440 | orchestrator | Friday 13 March 2026 01:04:15 +0000 (0:00:10.782) 0:01:32.114 ********** 2026-03-13 01:04:31.725444 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:04:31.725447 | orchestrator | changed: [testbed-node-2] 2026-03-13 01:04:31.725451 | orchestrator | changed: [testbed-node-1] 2026-03-13 01:04:31.725454 | orchestrator | 2026-03-13 01:04:31.725458 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 01:04:31.725462 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-13 01:04:31.725466 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-13 01:04:31.725470 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-13 01:04:31.725474 | orchestrator | 2026-03-13 01:04:31.725478 | orchestrator | 2026-03-13 01:04:31.725482 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 01:04:31.725487 | orchestrator | Friday 13 March 2026 01:04:30 +0000 (0:00:14.847) 0:01:46.961 ********** 2026-03-13 01:04:31.725494 | orchestrator | =============================================================================== 2026-03-13 01:04:31.725501 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 14.85s 2026-03-13 01:04:31.725506 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 14.44s 2026-03-13 01:04:31.725511 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 10.78s 2026-03-13 01:04:31.725516 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.52s 2026-03-13 01:04:31.725521 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 5.93s 
2026-03-13 01:04:31.725526 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.99s 2026-03-13 01:04:31.725531 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.91s 2026-03-13 01:04:31.725536 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.81s 2026-03-13 01:04:31.725540 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.41s 2026-03-13 01:04:31.725545 | orchestrator | magnum : Check magnum containers ---------------------------------------- 3.35s 2026-03-13 01:04:31.725549 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.23s 2026-03-13 01:04:31.725558 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.14s 2026-03-13 01:04:31.725563 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 2.99s 2026-03-13 01:04:31.725569 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.82s 2026-03-13 01:04:31.725574 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 2.73s 2026-03-13 01:04:31.725579 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.71s 2026-03-13 01:04:31.725584 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.43s 2026-03-13 01:04:31.725590 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.41s 2026-03-13 01:04:31.725594 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.31s 2026-03-13 01:04:31.725598 | orchestrator | magnum : Copying over existing policy file ------------------------------ 2.06s 2026-03-13 01:04:31.725601 | orchestrator | 2026-03-13 01:04:31 | INFO  | Task bcd8c5af-ddf8-4abd-abe0-4cc27c79d7d7 is in state STARTED 
2026-03-13 01:04:31.725604 | orchestrator | 2026-03-13 01:04:31 | INFO  | Task 7b6a4f16-d3f8-4f5c-8d27-c9f6b0982ad9 is in state STARTED 2026-03-13 01:04:31.725671 | orchestrator | 2026-03-13 01:04:31 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state STARTED 2026-03-13 01:04:31.725783 | orchestrator | 2026-03-13 01:04:31 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:04:34.751037 | orchestrator | 2026-03-13 01:04:34 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED 2026-03-13 01:04:34.751731 | orchestrator | 2026-03-13 01:04:34 | INFO  | Task bcd8c5af-ddf8-4abd-abe0-4cc27c79d7d7 is in state STARTED 2026-03-13 01:04:34.752606 | orchestrator | 2026-03-13 01:04:34 | INFO  | Task 7b6a4f16-d3f8-4f5c-8d27-c9f6b0982ad9 is in state STARTED 2026-03-13 01:04:34.753442 | orchestrator | 2026-03-13 01:04:34 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state STARTED 2026-03-13 01:04:34.753473 | orchestrator | 2026-03-13 01:04:34 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:05:53.756218 | orchestrator | 2026-03-13 01:05:53 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED 2026-03-13 01:05:53.758390 | orchestrator | 2026-03-13 01:05:53 | INFO  | Task bcd8c5af-ddf8-4abd-abe0-4cc27c79d7d7 is in state STARTED 2026-03-13 01:05:53.760057 | orchestrator | 2026-03-13 01:05:53 | INFO  | Task 
7b6a4f16-d3f8-4f5c-8d27-c9f6b0982ad9 is in state STARTED 2026-03-13 01:05:53.761422 | orchestrator | 2026-03-13 01:05:53 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state STARTED 2026-03-13 01:05:53.761479 | orchestrator | 2026-03-13 01:05:53 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:05:56.807504 | orchestrator | 2026-03-13 01:05:56 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED 2026-03-13 01:05:56.808888 | orchestrator | 2026-03-13 01:05:56 | INFO  | Task bcd8c5af-ddf8-4abd-abe0-4cc27c79d7d7 is in state STARTED 2026-03-13 01:05:56.811189 | orchestrator | 2026-03-13 01:05:56 | INFO  | Task 7b6a4f16-d3f8-4f5c-8d27-c9f6b0982ad9 is in state STARTED 2026-03-13 01:05:56.812740 | orchestrator | 2026-03-13 01:05:56 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state STARTED 2026-03-13 01:05:56.812835 | orchestrator | 2026-03-13 01:05:56 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:05:59.853610 | orchestrator | 2026-03-13 01:05:59 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED 2026-03-13 01:05:59.855884 | orchestrator | 2026-03-13 01:05:59 | INFO  | Task bcd8c5af-ddf8-4abd-abe0-4cc27c79d7d7 is in state STARTED 2026-03-13 01:05:59.858580 | orchestrator | 2026-03-13 01:05:59 | INFO  | Task 7b6a4f16-d3f8-4f5c-8d27-c9f6b0982ad9 is in state STARTED 2026-03-13 01:05:59.860914 | orchestrator | 2026-03-13 01:05:59 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state STARTED 2026-03-13 01:05:59.860960 | orchestrator | 2026-03-13 01:05:59 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:06:02.907468 | orchestrator | 2026-03-13 01:06:02 | INFO  | Task c4285381-6e9a-4f3a-9d44-8287f6045e4f is in state STARTED 2026-03-13 01:06:02.909557 | orchestrator | 2026-03-13 01:06:02 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED 2026-03-13 01:06:02.911577 | orchestrator | 2026-03-13 01:06:02 | INFO  | Task 
bcd8c5af-ddf8-4abd-abe0-4cc27c79d7d7 is in state STARTED 2026-03-13 01:06:02.916700 | orchestrator | 2026-03-13 01:06:02 | INFO  | Task 7b6a4f16-d3f8-4f5c-8d27-c9f6b0982ad9 is in state SUCCESS 2026-03-13 01:06:02.918192 | orchestrator | 2026-03-13 01:06:02.918235 | orchestrator | 2026-03-13 01:06:02.918240 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-13 01:06:02.918245 | orchestrator | 2026-03-13 01:06:02.918248 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-13 01:06:02.918252 | orchestrator | Friday 13 March 2026 01:03:10 +0000 (0:00:00.259) 0:00:00.259 ********** 2026-03-13 01:06:02.918255 | orchestrator | ok: [testbed-manager] 2026-03-13 01:06:02.918259 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:06:02.918262 | orchestrator | ok: [testbed-node-1] 2026-03-13 01:06:02.918265 | orchestrator | ok: [testbed-node-2] 2026-03-13 01:06:02.918269 | orchestrator | ok: [testbed-node-3] 2026-03-13 01:06:02.918272 | orchestrator | ok: [testbed-node-4] 2026-03-13 01:06:02.918275 | orchestrator | ok: [testbed-node-5] 2026-03-13 01:06:02.918278 | orchestrator | 2026-03-13 01:06:02.918281 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-13 01:06:02.918284 | orchestrator | Friday 13 March 2026 01:03:11 +0000 (0:00:00.758) 0:00:01.017 ********** 2026-03-13 01:06:02.918288 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-03-13 01:06:02.918291 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-03-13 01:06:02.918294 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-03-13 01:06:02.918297 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-03-13 01:06:02.918314 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-03-13 01:06:02.918317 | orchestrator | ok: [testbed-node-4] => 
(item=enable_prometheus_True) 2026-03-13 01:06:02.918320 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-03-13 01:06:02.918324 | orchestrator | 2026-03-13 01:06:02.918327 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-03-13 01:06:02.918330 | orchestrator | 2026-03-13 01:06:02.918333 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-13 01:06:02.918336 | orchestrator | Friday 13 March 2026 01:03:12 +0000 (0:00:00.606) 0:00:01.624 ********** 2026-03-13 01:06:02.918339 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-13 01:06:02.918343 | orchestrator | 2026-03-13 01:06:02.918346 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-03-13 01:06:02.918349 | orchestrator | Friday 13 March 2026 01:03:13 +0000 (0:00:01.337) 0:00:02.961 ********** 2026-03-13 01:06:02.918354 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-13 01:06:02.918359 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-13 01:06:02.918363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-13 01:06:02.918373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-13 01:06:02.918391 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-13 01:06:02.918403 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-13 01:06:02.918408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.918411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.918414 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-13 01:06:02.918418 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.918422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.918426 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-13 01:06:02.918431 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.918438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.918493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.918499 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.918503 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-13 01:06:02.918509 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.918514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.918520 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.918529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.918534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 
2026-03-13 01:06:02.918540 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.918545 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.918550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.918562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.918570 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.918582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.918588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.918593 | orchestrator | 2026-03-13 01:06:02.918644 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-13 
01:06:02.918651 | orchestrator | Friday 13 March 2026 01:03:16 +0000 (0:00:02.675) 0:00:05.637 ********** 2026-03-13 01:06:02.918657 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-13 01:06:02.918662 | orchestrator | 2026-03-13 01:06:02.918796 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-03-13 01:06:02.918805 | orchestrator | Friday 13 March 2026 01:03:17 +0000 (0:00:01.279) 0:00:06.916 ********** 2026-03-13 01:06:02.918813 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-13 01:06:02.918825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-13 01:06:02.918832 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-13 01:06:02.918840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-13 01:06:02.919035 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-13 01:06:02.919043 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-13 01:06:02.919052 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-13 01:06:02.919056 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-13 01:06:02.919061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.919065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.919069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.919085 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.919102 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-13 
01:06:02.919106 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.919110 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.919114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.919118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.919127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.919132 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.919141 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.919149 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.919153 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-13 01:06:02.919158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-13 
01:06:02.919162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.919166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.919170 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.919191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.919198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.919202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.919206 | orchestrator | 2026-03-13 01:06:02.919210 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-03-13 01:06:02.919237 | orchestrator | Friday 13 March 2026 01:03:22 +0000 (0:00:05.217) 0:00:12.134 ********** 2026-03-13 01:06:02.919247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-13 01:06:02.919251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 01:06:02.919255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 01:06:02.919568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-13 01:06:02.919781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 01:06:02.919810 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-13 01:06:02.919815 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-13 01:06:02.919826 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-13 01:06:02.919830 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-13 01:06:02.919834 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 01:06:02.919840 | orchestrator | skipping: [testbed-node-0] 
2026-03-13 01:06:02.919844 | orchestrator | skipping: [testbed-manager] 2026-03-13 01:06:02.919847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-13 01:06:02.919853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 01:06:02.919871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 01:06:02.919875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-13 01:06:02.919878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 01:06:02.919881 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:06:02.919885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-13 01:06:02.919888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 
01:06:02.919894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 01:06:02.919902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-13 01:06:02.919908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 01:06:02.919915 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:06:02.919928 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-13 01:06:02.919932 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-13 01:06:02.919935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-13 01:06:02.919939 | orchestrator | skipping: [testbed-node-4] 2026-03-13 01:06:02.919942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-13 01:06:02.919951 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-13 01:06:02.919957 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-13 01:06:02.919962 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:06:02.919967 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-13 01:06:02.919975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-13 01:06:02.919994 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-13 01:06:02.919998 | orchestrator | skipping: [testbed-node-5] 2026-03-13 01:06:02.920055 | orchestrator | 2026-03-13 01:06:02.920062 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-03-13 01:06:02.920066 | orchestrator | Friday 13 March 2026 01:03:24 +0000 (0:00:01.634) 0:00:13.769 ********** 2026-03-13 01:06:02.920069 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-13 01:06:02.920073 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-13 01:06:02.920079 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-13 01:06:02.920083 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-13 01:06:02.920112 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 01:06:02.920125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-13 01:06:02.920129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 01:06:02.920133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 01:06:02.920139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-13 01:06:02.920142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 01:06:02.920145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-13 01:06:02.920149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 01:06:02.920154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 01:06:02.920166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-13 01:06:02.920170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 01:06:02.920173 | orchestrator | skipping: [testbed-manager] 2026-03-13 01:06:02.920176 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:06:02.920180 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:06:02.920185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-13 01:06:02.920188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 01:06:02.920192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 01:06:02.920200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-13 01:06:02.920206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-13 01:06:02.920210 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:06:02.920223 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-13 01:06:02.920229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-13 01:06:02.920234 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-13 01:06:02.920244 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:06:02.920249 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-13 01:06:02.920254 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-13 01:06:02.920350 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-13 01:06:02.920356 | orchestrator | skipping: [testbed-node-4] 2026-03-13 01:06:02.920362 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-13 01:06:02.920372 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-13 
01:06:02.920395 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-13 01:06:02.920401 | orchestrator | skipping: [testbed-node-5] 2026-03-13 01:06:02.920406 | orchestrator | 2026-03-13 01:06:02.920412 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-03-13 01:06:02.920416 | orchestrator | Friday 13 March 2026 01:03:26 +0000 (0:00:02.232) 0:00:16.002 ********** 2026-03-13 01:06:02.920424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-13 01:06:02.920430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-13 01:06:02.920435 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-13 01:06:02.920441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-13 01:06:02.920446 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-13 01:06:02.920455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.920477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.920483 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-13 01:06:02.920493 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-13 01:06:02.920499 | orchestrator 
| changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.920502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.920506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.920509 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.920514 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-13 01:06:02.920531 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.920542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.920548 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.920554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.920560 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.920565 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-13 01:06:02.920574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.920594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.920604 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.920610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.920616 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.920619 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.920622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.920626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.920632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.920637 | orchestrator | 2026-03-13 01:06:02.920641 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-13 01:06:02.920644 | orchestrator | Friday 13 March 2026 01:03:33 +0000 (0:00:06.287) 0:00:22.289 ********** 2026-03-13 01:06:02.920647 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-13 01:06:02.920650 | orchestrator | 2026-03-13 01:06:02.920653 | orchestrator 
| TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-13 01:06:02.920667 | orchestrator | Friday 13 March 2026 01:03:34 +0000 (0:00:01.277) 0:00:23.567 ********** 2026-03-13 01:06:02.920671 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1328098, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3716924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.920674 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1328098, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3716924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.920678 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1328098, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3716924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-13 01:06:02.920681 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'size': 996, ...})
2026-03-13 01:06:02.920685 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'size': 12944, ...})
2026-03-13 01:06:02.920690 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'size': 12944, ...})
2026-03-13 01:06:02.920704 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'size': 12944, ...})
2026-03-13 01:06:02.920708 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'size': 996, ...})
2026-03-13 01:06:02.920711 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'size': 56929, ...})
2026-03-13 01:06:02.920714 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'size': 996, ...})
2026-03-13 01:06:02.920718 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'size': 996, ...})
2026-03-13 01:06:02.920721 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'size': 56929, ...})
2026-03-13 01:06:02.920726 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'size': 12944, ...})
2026-03-13 01:06:02.920740 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'size': 12293, ...})
2026-03-13 01:06:02.920744 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'size': 56929, ...})
2026-03-13 01:06:02.920747 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'size': 12944, ...})
2026-03-13 01:06:02.920762 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'size': 12944, ...})
2026-03-13 01:06:02.920768 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'size': 12944, ...})
2026-03-13 01:06:02.920772 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'size': 12293, ...})
2026-03-13 01:06:02.920781 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'size': 3900, ...})
2026-03-13 01:06:02.920795 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'size': 12293, ...})
2026-03-13 01:06:02.920799 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'size': 56929, ...})
2026-03-13 01:06:02.920802 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'size': 56929, ...})
2026-03-13 01:06:02.920805 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'size': 3900, ...})
2026-03-13 01:06:02.920808 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'size': 56929, ...})
2026-03-13 01:06:02.920811 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'size': 7933, ...})
2026-03-13 01:06:02.920818 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'size': 7933, ...})
2026-03-13 01:06:02.920823 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'size': 12293, ...})
2026-03-13 01:06:02.920836 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'size': 12293, ...})
2026-03-13 01:06:02.920840 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'size': 56929, ...})
2026-03-13 01:06:02.920843 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'size': 3900, ...})
2026-03-13 01:06:02.920846 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'size': 3900, ...})
2026-03-13 01:06:02.920850 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'size': 12293, ...})
2026-03-13 01:06:02.920855 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'size': 7933, ...})
2026-03-13 01:06:02.920860 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'size': 14018, ...})
2026-03-13 01:06:02.920873 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'size': 3900, ...})
2026-03-13 01:06:02.920877 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'size': 14018, ...})
2026-03-13 01:06:02.920880 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'size': 7933, ...})
2026-03-13 01:06:02.920883 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'size': 5593, ...})
2026-03-13 01:06:02.920886 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'size': 3900, ...})
2026-03-13 01:06:02.920892 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'size': 14018, ...})
2026-03-13 01:06:02.920897 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'size': 5593, ...})
2026-03-13 01:06:02.920909 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'size': 7933, ...})
2026-03-13 01:06:02.920913 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'size': 5987, ...})
2026-03-13 01:06:02.920916 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'size': 5987, ...})
2026-03-13 01:06:02.920920 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'size': 3, ...})
2026-03-13 01:06:02.920925 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'size': 7933, ...})
2026-03-13 01:06:02.920928 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'size': 14018, ...})
2026-03-13 01:06:02.920934 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'size': 14018, ...})
2026-03-13 01:06:02.920946 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'size': 3, ...})
2026-03-13 01:06:02.920950 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'size': 5593, ...})
2026-03-13 01:06:02.920953 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'size': 12293, ...})
2026-03-13 01:06:02.920956 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'size': 3, ...})
2026-03-13 01:06:02.920962 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'size': 14018, ...})
2026-03-13 01:06:02.920965 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'size': 5593, ...})
2026-03-13 01:06:02.920970 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'size': 5593, ...})
2026-03-13 01:06:02.920982 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'size': 3, ...})
2026-03-13 01:06:02.920986 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'size': 5987, ...})
2026-03-13 01:06:02.920989 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'size': 334, ...})
2026-03-13 01:06:02.920992 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'size': 5987, ...})
2026-03-13 01:06:02.920998 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'size': 5593, ...})
2026-03-13 01:06:02.921001 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'size': 3, ...})
2026-03-13 01:06:02.921005 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'size': 3, ...})
2026-03-13 01:06:02.921017 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'size': 3900, ...})
2026-03-13 01:06:02.921021 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'size': 3, ...})
2026-03-13 01:06:02.921025 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'size': 5987, ...})
2026-03-13 01:06:02.921030 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'size': 7408, ...})
2026-03-13 01:06:02.921033 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'size': 334, ...})
2026-03-13 01:06:02.921036 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'size': 3, ...})
2026-03-13 01:06:02.921041 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'size': 334, ...})
2026-03-13 01:06:02.921055 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'size': 7408, ...})
2026-03-13 01:06:02.921058 | orchestrator | skipping: [testbed-node-3] 
=> (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1328095, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3716924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921062 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328159, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3876486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921067 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328088, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3698728, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921071 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1328175, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3901331, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921074 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328088, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3698728, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921080 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1328138, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3872175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921085 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328073, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 
1773360140.0, 'ctime': 1773361112.3683176, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921089 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328088, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3698728, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921092 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1328077, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3686604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921097 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328159, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3876486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921100 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1328077, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3686604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921104 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1328138, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3872175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921109 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1328077, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3686604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921116 | orchestrator | 
skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1328175, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3901331, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921120 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1328100, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3720284, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-13 01:06:02.921123 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328073, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3683176, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921129 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1328116, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3768134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921132 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1328138, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3872175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921135 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1328116, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3768134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921140 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328088, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 
'mtime': 1773360140.0, 'ctime': 1773361112.3698728, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921146 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1328116, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3768134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921149 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1328175, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3901331, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921155 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1328105, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3757384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921158 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1328105, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3757384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921162 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1328105, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3757384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921165 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328088, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3698728, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921170 | orchestrator | skipping: [testbed-node-3] => 
(item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1328138, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3872175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921175 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1328077, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3686604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921179 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1328172, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3898904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921185 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:06:02.921189 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1328123, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3778317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-13 01:06:02.921192 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1328172, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3898904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921195 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:06:02.921198 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1328077, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3686604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921202 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1328172, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3898904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921205 | orchestrator | skipping: [testbed-node-5] 2026-03-13 01:06:02.921210 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328088, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3698728, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921215 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1328116, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3768134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921223 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1328116, 'dev': 118, 
'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3768134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921228 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1328077, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3686604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921233 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1328105, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3757384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921238 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1328116, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3768134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921255 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1328105, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3757384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921263 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1328105, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3757384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921272 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1328104, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.372946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-13 01:06:02.921281 | 
orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1328172, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3898904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921286 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:06:02.921291 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1328172, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3898904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921296 | orchestrator | skipping: [testbed-node-4] 2026-03-13 01:06:02.921301 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1328172, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3898904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-13 01:06:02.921305 | orchestrator | 
skipping: [testbed-node-3] 2026-03-13 01:06:02.921311 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1328095, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3716924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-13 01:06:02.921316 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328159, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3876486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-13 01:06:02.921323 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328073, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3683176, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-13 01:06:02.921335 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1328175, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3901331, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-13 01:06:02.921340 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1328138, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3872175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-13 01:06:02.921345 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328088, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3698728, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-13 01:06:02.921350 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1328077, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3686604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-13 01:06:02.921356 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1328116, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3768134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-13 01:06:02.921361 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1328105, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3757384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-13 01:06:02.921367 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1328172, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 
1773361112.3898904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-13 01:06:02.921376 | orchestrator | 2026-03-13 01:06:02.921379 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-03-13 01:06:02.921382 | orchestrator | Friday 13 March 2026 01:03:59 +0000 (0:00:25.000) 0:00:48.567 ********** 2026-03-13 01:06:02.921385 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-13 01:06:02.921389 | orchestrator | 2026-03-13 01:06:02.921394 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-03-13 01:06:02.921397 | orchestrator | Friday 13 March 2026 01:04:00 +0000 (0:00:00.739) 0:00:49.307 ********** 2026-03-13 01:06:02.921400 | orchestrator | [WARNING]: Skipped 2026-03-13 01:06:02.921410 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-13 01:06:02.921413 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-03-13 01:06:02.921416 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-13 01:06:02.921420 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-03-13 01:06:02.921423 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-13 01:06:02.921426 | orchestrator | [WARNING]: Skipped 2026-03-13 01:06:02.921429 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-13 01:06:02.921432 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-03-13 01:06:02.921435 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-13 01:06:02.921438 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-03-13 01:06:02.921441 | orchestrator | ok: [testbed-node-0 -> 
localhost] 2026-03-13 01:06:02.921445 | orchestrator | [WARNING]: Skipped 2026-03-13 01:06:02.921448 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-13 01:06:02.921451 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-03-13 01:06:02.921454 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-13 01:06:02.921457 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-03-13 01:06:02.921460 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-13 01:06:02.921467 | orchestrator | [WARNING]: Skipped 2026-03-13 01:06:02.921470 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-13 01:06:02.921473 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-03-13 01:06:02.921476 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-13 01:06:02.921479 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-03-13 01:06:02.921482 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-13 01:06:02.921486 | orchestrator | [WARNING]: Skipped 2026-03-13 01:06:02.921489 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-13 01:06:02.921492 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-03-13 01:06:02.921495 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-13 01:06:02.921498 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-03-13 01:06:02.921501 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-13 01:06:02.921504 | orchestrator | [WARNING]: Skipped 2026-03-13 01:06:02.921507 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-13 01:06:02.921510 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-03-13 
01:06:02.921513 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-13 01:06:02.921516 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-03-13 01:06:02.921520 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-13 01:06:02.921523 | orchestrator | [WARNING]: Skipped 2026-03-13 01:06:02.921526 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-13 01:06:02.921539 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-03-13 01:06:02.921542 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-13 01:06:02.921545 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-03-13 01:06:02.921548 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-13 01:06:02.921551 | orchestrator | 2026-03-13 01:06:02.921554 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-03-13 01:06:02.921557 | orchestrator | Friday 13 March 2026 01:04:01 +0000 (0:00:01.547) 0:00:50.854 ********** 2026-03-13 01:06:02.921560 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-13 01:06:02.921564 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:06:02.921567 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-13 01:06:02.921570 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:06:02.921573 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-13 01:06:02.921576 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:06:02.921579 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-13 01:06:02.921582 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:06:02.921587 | orchestrator | 
skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-13 01:06:02.921590 | orchestrator | skipping: [testbed-node-4] 2026-03-13 01:06:02.921593 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-13 01:06:02.921596 | orchestrator | skipping: [testbed-node-5] 2026-03-13 01:06:02.921599 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-03-13 01:06:02.921603 | orchestrator | 2026-03-13 01:06:02.921606 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-03-13 01:06:02.921609 | orchestrator | Friday 13 March 2026 01:04:17 +0000 (0:00:15.532) 0:01:06.387 ********** 2026-03-13 01:06:02.921614 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-13 01:06:02.921617 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-13 01:06:02.921620 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:06:02.921623 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:06:02.921626 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-13 01:06:02.921629 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:06:02.921632 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-13 01:06:02.921635 | orchestrator | skipping: [testbed-node-4] 2026-03-13 01:06:02.921638 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-13 01:06:02.921641 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:06:02.921644 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-13 
01:06:02.921648 | orchestrator | skipping: [testbed-node-5] 2026-03-13 01:06:02.921651 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-03-13 01:06:02.921654 | orchestrator | 2026-03-13 01:06:02.921657 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-03-13 01:06:02.921660 | orchestrator | Friday 13 March 2026 01:04:20 +0000 (0:00:03.766) 0:01:10.153 ********** 2026-03-13 01:06:02.921663 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-13 01:06:02.921666 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:06:02.921672 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-03-13 01:06:02.921676 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-13 01:06:02.921679 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:06:02.921682 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-13 01:06:02.921685 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:06:02.921688 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-13 01:06:02.921691 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:06:02.921694 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-13 01:06:02.921698 | orchestrator | skipping: [testbed-node-4] 2026-03-13 01:06:02.921701 | orchestrator | skipping: [testbed-node-5] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-13 01:06:02.921704 | orchestrator | skipping: [testbed-node-5] 2026-03-13 01:06:02.921708 | orchestrator | 2026-03-13 01:06:02.921713 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-03-13 01:06:02.921721 | orchestrator | Friday 13 March 2026 01:04:23 +0000 (0:00:02.324) 0:01:12.478 ********** 2026-03-13 01:06:02.921727 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-13 01:06:02.921732 | orchestrator | 2026-03-13 01:06:02.921737 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-03-13 01:06:02.921741 | orchestrator | Friday 13 March 2026 01:04:24 +0000 (0:00:00.970) 0:01:13.449 ********** 2026-03-13 01:06:02.921746 | orchestrator | skipping: [testbed-manager] 2026-03-13 01:06:02.921816 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:06:02.921826 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:06:02.921832 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:06:02.921837 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:06:02.921842 | orchestrator | skipping: [testbed-node-4] 2026-03-13 01:06:02.921847 | orchestrator | skipping: [testbed-node-5] 2026-03-13 01:06:02.921852 | orchestrator | 2026-03-13 01:06:02.921858 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-03-13 01:06:02.921862 | orchestrator | Friday 13 March 2026 01:04:24 +0000 (0:00:00.644) 0:01:14.093 ********** 2026-03-13 01:06:02.921865 | orchestrator | skipping: [testbed-manager] 2026-03-13 01:06:02.921868 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:06:02.921871 | orchestrator | skipping: [testbed-node-4] 2026-03-13 01:06:02.921874 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:06:02.921883 | orchestrator | skipping: [testbed-node-5] 2026-03-13 01:06:02.921886 | 
orchestrator | changed: [testbed-node-1] 2026-03-13 01:06:02.921889 | orchestrator | changed: [testbed-node-2] 2026-03-13 01:06:02.921892 | orchestrator | 2026-03-13 01:06:02.921895 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-03-13 01:06:02.921898 | orchestrator | Friday 13 March 2026 01:04:27 +0000 (0:00:02.655) 0:01:16.748 ********** 2026-03-13 01:06:02.921905 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-13 01:06:02.921909 | orchestrator | skipping: [testbed-manager] 2026-03-13 01:06:02.921912 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-13 01:06:02.921915 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:06:02.921918 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-13 01:06:02.921921 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:06:02.921924 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-13 01:06:02.921927 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:06:02.921937 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-13 01:06:02.921940 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:06:02.921943 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-13 01:06:02.921946 | orchestrator | skipping: [testbed-node-4] 2026-03-13 01:06:02.921949 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-13 01:06:02.921953 | orchestrator | skipping: [testbed-node-5] 2026-03-13 01:06:02.921956 | orchestrator | 2026-03-13 01:06:02.921959 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-03-13 
01:06:02.921962 | orchestrator | Friday 13 March 2026 01:04:29 +0000 (0:00:01.714) 0:01:18.463 ********** 2026-03-13 01:06:02.921965 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-13 01:06:02.921968 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:06:02.921971 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-13 01:06:02.921975 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:06:02.921978 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-13 01:06:02.921981 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:06:02.921984 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-03-13 01:06:02.921987 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-13 01:06:02.921990 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:06:02.921993 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-13 01:06:02.921996 | orchestrator | skipping: [testbed-node-5] 2026-03-13 01:06:02.922000 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-13 01:06:02.922003 | orchestrator | skipping: [testbed-node-4] 2026-03-13 01:06:02.922006 | orchestrator | 2026-03-13 01:06:02.922009 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-03-13 01:06:02.922045 | orchestrator | Friday 13 March 2026 01:04:31 +0000 (0:00:02.199) 0:01:20.663 ********** 2026-03-13 01:06:02.922050 | orchestrator | [WARNING]: Skipped 2026-03-13 01:06:02.922053 | orchestrator 
| '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-03-13 01:06:02.922056 | orchestrator | due to this access issue: 2026-03-13 01:06:02.922059 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-03-13 01:06:02.922063 | orchestrator | not a directory 2026-03-13 01:06:02.922066 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-13 01:06:02.922069 | orchestrator | 2026-03-13 01:06:02.922072 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-03-13 01:06:02.922075 | orchestrator | Friday 13 March 2026 01:04:33 +0000 (0:00:01.756) 0:01:22.420 ********** 2026-03-13 01:06:02.922078 | orchestrator | skipping: [testbed-manager] 2026-03-13 01:06:02.922081 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:06:02.922084 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:06:02.922087 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:06:02.922090 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:06:02.922093 | orchestrator | skipping: [testbed-node-4] 2026-03-13 01:06:02.922096 | orchestrator | skipping: [testbed-node-5] 2026-03-13 01:06:02.922099 | orchestrator | 2026-03-13 01:06:02.922103 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-03-13 01:06:02.922106 | orchestrator | Friday 13 March 2026 01:04:34 +0000 (0:00:01.162) 0:01:23.582 ********** 2026-03-13 01:06:02.922109 | orchestrator | skipping: [testbed-manager] 2026-03-13 01:06:02.922115 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:06:02.922118 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:06:02.922121 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:06:02.922126 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:06:02.922132 | orchestrator | skipping: [testbed-node-4] 2026-03-13 01:06:02.922135 | orchestrator | skipping: [testbed-node-5] 2026-03-13 
01:06:02.922138 | orchestrator | 2026-03-13 01:06:02.922141 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-03-13 01:06:02.922144 | orchestrator | Friday 13 March 2026 01:04:34 +0000 (0:00:00.697) 0:01:24.280 ********** 2026-03-13 01:06:02.922152 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-13 01:06:02.922161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-13 01:06:02.922164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-13 01:06:02.922167 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-13 01:06:02.922171 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-13 01:06:02.922177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-13 01:06:02.922185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.922188 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-13 01:06:02.922193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.922199 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}}) 2026-03-13 01:06:02.922203 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-13 01:06:02.922206 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.922209 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.922213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.922218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.922223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.922231 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.922237 | orchestrator | changed: [testbed-manager] => 
(item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-13 01:06:02.922241 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.922245 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-13 
01:06:02.922248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.922253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.922257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.922262 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.922270 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.922274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.922277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.922280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-13 01:06:02.922286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-13 01:06:02.922289 | orchestrator | 2026-03-13 01:06:02.922292 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-03-13 01:06:02.922295 | orchestrator | Friday 13 March 2026 01:04:39 +0000 (0:00:04.522) 0:01:28.803 ********** 2026-03-13 01:06:02.922298 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-13 01:06:02.922301 | orchestrator | skipping: [testbed-manager] 2026-03-13 01:06:02.922304 | orchestrator | 2026-03-13 01:06:02.922308 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-13 01:06:02.922311 | orchestrator | Friday 13 March 2026 01:04:40 +0000 (0:00:01.385) 0:01:30.188 ********** 2026-03-13 01:06:02.922314 | orchestrator | 2026-03-13 01:06:02.922317 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-13 01:06:02.922320 | orchestrator | Friday 13 March 2026 
01:04:40 +0000 (0:00:00.056) 0:01:30.245 ********** 2026-03-13 01:06:02.922323 | orchestrator | 2026-03-13 01:06:02.922326 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-13 01:06:02.922329 | orchestrator | Friday 13 March 2026 01:04:41 +0000 (0:00:00.054) 0:01:30.299 ********** 2026-03-13 01:06:02.922334 | orchestrator | 2026-03-13 01:06:02.922339 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-13 01:06:02.922344 | orchestrator | Friday 13 March 2026 01:04:41 +0000 (0:00:00.088) 0:01:30.387 ********** 2026-03-13 01:06:02.922349 | orchestrator | 2026-03-13 01:06:02.922352 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-13 01:06:02.922355 | orchestrator | Friday 13 March 2026 01:04:41 +0000 (0:00:00.284) 0:01:30.672 ********** 2026-03-13 01:06:02.922392 | orchestrator | 2026-03-13 01:06:02.922399 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-13 01:06:02.922410 | orchestrator | Friday 13 March 2026 01:04:41 +0000 (0:00:00.129) 0:01:30.801 ********** 2026-03-13 01:06:02.922415 | orchestrator | 2026-03-13 01:06:02.922421 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-13 01:06:02.922426 | orchestrator | Friday 13 March 2026 01:04:41 +0000 (0:00:00.053) 0:01:30.854 ********** 2026-03-13 01:06:02.922430 | orchestrator | 2026-03-13 01:06:02.922434 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-03-13 01:06:02.922439 | orchestrator | Friday 13 March 2026 01:04:41 +0000 (0:00:00.136) 0:01:30.991 ********** 2026-03-13 01:06:02.922444 | orchestrator | changed: [testbed-manager] 2026-03-13 01:06:02.922449 | orchestrator | 2026-03-13 01:06:02.922454 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter 
container] ****** 2026-03-13 01:06:02.922463 | orchestrator | Friday 13 March 2026 01:04:54 +0000 (0:00:12.999) 0:01:43.991 ********** 2026-03-13 01:06:02.922468 | orchestrator | changed: [testbed-node-4] 2026-03-13 01:06:02.922473 | orchestrator | changed: [testbed-node-2] 2026-03-13 01:06:02.922477 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:06:02.922481 | orchestrator | changed: [testbed-node-3] 2026-03-13 01:06:02.922486 | orchestrator | changed: [testbed-manager] 2026-03-13 01:06:02.922491 | orchestrator | changed: [testbed-node-1] 2026-03-13 01:06:02.922497 | orchestrator | changed: [testbed-node-5] 2026-03-13 01:06:02.922502 | orchestrator | 2026-03-13 01:06:02.922506 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-03-13 01:06:02.922514 | orchestrator | Friday 13 March 2026 01:05:09 +0000 (0:00:14.640) 0:01:58.631 ********** 2026-03-13 01:06:02.922519 | orchestrator | changed: [testbed-node-2] 2026-03-13 01:06:02.922524 | orchestrator | changed: [testbed-node-1] 2026-03-13 01:06:02.922529 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:06:02.922533 | orchestrator | 2026-03-13 01:06:02.922539 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-03-13 01:06:02.922544 | orchestrator | Friday 13 March 2026 01:05:15 +0000 (0:00:06.167) 0:02:04.799 ********** 2026-03-13 01:06:02.922549 | orchestrator | changed: [testbed-node-1] 2026-03-13 01:06:02.922554 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:06:02.922559 | orchestrator | changed: [testbed-node-2] 2026-03-13 01:06:02.922564 | orchestrator | 2026-03-13 01:06:02.922569 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-03-13 01:06:02.922573 | orchestrator | Friday 13 March 2026 01:05:20 +0000 (0:00:05.097) 0:02:09.896 ********** 2026-03-13 01:06:02.922578 | orchestrator | changed: [testbed-manager] 2026-03-13 
01:06:02.922583 | orchestrator | changed: [testbed-node-1] 2026-03-13 01:06:02.922588 | orchestrator | changed: [testbed-node-2] 2026-03-13 01:06:02.922593 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:06:02.922598 | orchestrator | changed: [testbed-node-4] 2026-03-13 01:06:02.922603 | orchestrator | changed: [testbed-node-5] 2026-03-13 01:06:02.922608 | orchestrator | changed: [testbed-node-3] 2026-03-13 01:06:02.922613 | orchestrator | 2026-03-13 01:06:02.922619 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-03-13 01:06:02.922624 | orchestrator | Friday 13 March 2026 01:05:33 +0000 (0:00:13.208) 0:02:23.105 ********** 2026-03-13 01:06:02.922629 | orchestrator | changed: [testbed-manager] 2026-03-13 01:06:02.922634 | orchestrator | 2026-03-13 01:06:02.922639 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-03-13 01:06:02.922652 | orchestrator | Friday 13 March 2026 01:05:39 +0000 (0:00:05.932) 0:02:29.038 ********** 2026-03-13 01:06:02.922658 | orchestrator | changed: [testbed-node-1] 2026-03-13 01:06:02.922663 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:06:02.922668 | orchestrator | changed: [testbed-node-2] 2026-03-13 01:06:02.922674 | orchestrator | 2026-03-13 01:06:02.922679 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-03-13 01:06:02.922684 | orchestrator | Friday 13 March 2026 01:05:45 +0000 (0:00:06.182) 0:02:35.220 ********** 2026-03-13 01:06:02.922688 | orchestrator | changed: [testbed-manager] 2026-03-13 01:06:02.922691 | orchestrator | 2026-03-13 01:06:02.922695 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-03-13 01:06:02.922700 | orchestrator | Friday 13 March 2026 01:05:51 +0000 (0:00:05.974) 0:02:41.195 ********** 2026-03-13 01:06:02.922706 | orchestrator | changed: [testbed-node-4] 2026-03-13 
01:06:02.922709 | orchestrator | changed: [testbed-node-5] 2026-03-13 01:06:02.922712 | orchestrator | changed: [testbed-node-3] 2026-03-13 01:06:02.922716 | orchestrator | 2026-03-13 01:06:02.922719 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 01:06:02.922723 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-13 01:06:02.922726 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-13 01:06:02.922729 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-13 01:06:02.922732 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-13 01:06:02.922736 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-13 01:06:02.922747 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-13 01:06:02.922761 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-13 01:06:02.922768 | orchestrator | 2026-03-13 01:06:02.922771 | orchestrator | 2026-03-13 01:06:02.922774 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 01:06:02.922780 | orchestrator | Friday 13 March 2026 01:06:01 +0000 (0:00:09.548) 0:02:50.743 ********** 2026-03-13 01:06:02.922783 | orchestrator | =============================================================================== 2026-03-13 01:06:02.922786 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 25.00s 2026-03-13 01:06:02.922789 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 15.53s 2026-03-13 01:06:02.922795 | orchestrator | prometheus : 
Restart prometheus-node-exporter container ---------------- 14.64s 2026-03-13 01:06:02.922800 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.21s 2026-03-13 01:06:02.922804 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 13.00s 2026-03-13 01:06:02.922811 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 9.55s 2026-03-13 01:06:02.922814 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.29s 2026-03-13 01:06:02.922817 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 6.18s 2026-03-13 01:06:02.922820 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 6.17s 2026-03-13 01:06:02.922823 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.97s 2026-03-13 01:06:02.922826 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 5.93s 2026-03-13 01:06:02.922829 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.22s 2026-03-13 01:06:02.922832 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 5.10s 2026-03-13 01:06:02.922835 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.52s 2026-03-13 01:06:02.922838 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.77s 2026-03-13 01:06:02.922841 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.68s 2026-03-13 01:06:02.922844 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.66s 2026-03-13 01:06:02.922847 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.33s 2026-03-13 01:06:02.922851 | orchestrator | service-cert-copy : 
prometheus | Copying over backend internal TLS key --- 2.23s 2026-03-13 01:06:02.922854 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 2.20s 2026-03-13 01:06:02.922859 | orchestrator | 2026-03-13 01:06:02 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state STARTED 2026-03-13 01:06:02.922864 | orchestrator | 2026-03-13 01:06:02 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:06:05.956217 | orchestrator | 2026-03-13 01:06:05 | INFO  | Task c4285381-6e9a-4f3a-9d44-8287f6045e4f is in state STARTED 2026-03-13 01:06:05.956279 | orchestrator | 2026-03-13 01:06:05 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED 2026-03-13 01:06:05.957000 | orchestrator | 2026-03-13 01:06:05 | INFO  | Task bcd8c5af-ddf8-4abd-abe0-4cc27c79d7d7 is in state STARTED 2026-03-13 01:06:05.957603 | orchestrator | 2026-03-13 01:06:05 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state STARTED 2026-03-13 01:06:05.957713 | orchestrator | 2026-03-13 01:06:05 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:06:08.982279 | orchestrator | 2026-03-13 01:06:08 | INFO  | Task c4285381-6e9a-4f3a-9d44-8287f6045e4f is in state STARTED 2026-03-13 01:06:08.982686 | orchestrator | 2026-03-13 01:06:08 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED 2026-03-13 01:06:08.983500 | orchestrator | 2026-03-13 01:06:08 | INFO  | Task bcd8c5af-ddf8-4abd-abe0-4cc27c79d7d7 is in state STARTED 2026-03-13 01:06:08.984207 | orchestrator | 2026-03-13 01:06:08 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state STARTED 2026-03-13 01:06:08.984240 | orchestrator | 2026-03-13 01:06:08 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:06:12.034729 | orchestrator | 2026-03-13 01:06:12 | INFO  | Task c4285381-6e9a-4f3a-9d44-8287f6045e4f is in state STARTED 2026-03-13 01:06:12.036012 | orchestrator | 2026-03-13 01:06:12 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is 
in state STARTED 2026-03-13 01:06:12.037454 | orchestrator | 2026-03-13 01:06:12 | INFO  | Task bcd8c5af-ddf8-4abd-abe0-4cc27c79d7d7 is in state STARTED 2026-03-13 01:06:12.039073 | orchestrator | 2026-03-13 01:06:12 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state STARTED 2026-03-13 01:06:12.039108 | orchestrator | 2026-03-13 01:06:12 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:06:15.066423 | orchestrator | 2026-03-13 01:06:15 | INFO  | Task c4285381-6e9a-4f3a-9d44-8287f6045e4f is in state STARTED 2026-03-13 01:06:15.066995 | orchestrator | 2026-03-13 01:06:15 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED 2026-03-13 01:06:15.067715 | orchestrator | 2026-03-13 01:06:15 | INFO  | Task bcd8c5af-ddf8-4abd-abe0-4cc27c79d7d7 is in state STARTED 2026-03-13 01:06:15.068675 | orchestrator | 2026-03-13 01:06:15 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state STARTED 2026-03-13 01:06:15.068701 | orchestrator | 2026-03-13 01:06:15 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:06:18.091109 | orchestrator | 2026-03-13 01:06:18 | INFO  | Task c4285381-6e9a-4f3a-9d44-8287f6045e4f is in state STARTED 2026-03-13 01:06:18.092693 | orchestrator | 2026-03-13 01:06:18 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED 2026-03-13 01:06:18.093138 | orchestrator | 2026-03-13 01:06:18 | INFO  | Task bcd8c5af-ddf8-4abd-abe0-4cc27c79d7d7 is in state STARTED 2026-03-13 01:06:18.093811 | orchestrator | 2026-03-13 01:06:18 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state STARTED 2026-03-13 01:06:18.093831 | orchestrator | 2026-03-13 01:06:18 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:06:21.118062 | orchestrator | 2026-03-13 01:06:21 | INFO  | Task c4285381-6e9a-4f3a-9d44-8287f6045e4f is in state STARTED 2026-03-13 01:06:21.118573 | orchestrator | 2026-03-13 01:06:21 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED 2026-03-13 
01:06:21.119489 | orchestrator | 2026-03-13 01:06:21 | INFO  | Task bcd8c5af-ddf8-4abd-abe0-4cc27c79d7d7 is in state STARTED 2026-03-13 01:06:21.120034 | orchestrator | 2026-03-13 01:06:21 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state STARTED 2026-03-13 01:06:21.120052 | orchestrator | 2026-03-13 01:06:21 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:06:24.153402 | orchestrator | 2026-03-13 01:06:24 | INFO  | Task c4285381-6e9a-4f3a-9d44-8287f6045e4f is in state STARTED 2026-03-13 01:06:24.156225 | orchestrator | 2026-03-13 01:06:24 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED 2026-03-13 01:06:24.158793 | orchestrator | 2026-03-13 01:06:24 | INFO  | Task bcd8c5af-ddf8-4abd-abe0-4cc27c79d7d7 is in state STARTED 2026-03-13 01:06:24.161556 | orchestrator | 2026-03-13 01:06:24 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state STARTED 2026-03-13 01:06:24.161628 | orchestrator | 2026-03-13 01:06:24 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:06:27.203028 | orchestrator | 2026-03-13 01:06:27 | INFO  | Task c4285381-6e9a-4f3a-9d44-8287f6045e4f is in state STARTED 2026-03-13 01:06:27.204542 | orchestrator | 2026-03-13 01:06:27 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED 2026-03-13 01:06:27.206197 | orchestrator | 2026-03-13 01:06:27 | INFO  | Task bcd8c5af-ddf8-4abd-abe0-4cc27c79d7d7 is in state STARTED 2026-03-13 01:06:27.208088 | orchestrator | 2026-03-13 01:06:27 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state STARTED 2026-03-13 01:06:27.208121 | orchestrator | 2026-03-13 01:06:27 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:06:30.242255 | orchestrator | 2026-03-13 01:06:30 | INFO  | Task c4285381-6e9a-4f3a-9d44-8287f6045e4f is in state STARTED 2026-03-13 01:06:30.242679 | orchestrator | 2026-03-13 01:06:30 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED 2026-03-13 01:06:30.245166 | orchestrator 
| 2026-03-13 01:06:30 | INFO  | Task bcd8c5af-ddf8-4abd-abe0-4cc27c79d7d7 is in state STARTED 2026-03-13 01:06:30.245578 | orchestrator | 2026-03-13 01:06:30 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state STARTED 2026-03-13 01:06:30.245606 | orchestrator | 2026-03-13 01:06:30 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:06:33.282766 | orchestrator | 2026-03-13 01:06:33 | INFO  | Task c4285381-6e9a-4f3a-9d44-8287f6045e4f is in state STARTED 2026-03-13 01:06:33.284698 | orchestrator | 2026-03-13 01:06:33 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED 2026-03-13 01:06:33.287556 | orchestrator | 2026-03-13 01:06:33 | INFO  | Task bcd8c5af-ddf8-4abd-abe0-4cc27c79d7d7 is in state STARTED 2026-03-13 01:06:33.288847 | orchestrator | 2026-03-13 01:06:33 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state STARTED 2026-03-13 01:06:33.288941 | orchestrator | 2026-03-13 01:06:33 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:06:36.338002 | orchestrator | 2026-03-13 01:06:36 | INFO  | Task c4285381-6e9a-4f3a-9d44-8287f6045e4f is in state STARTED 2026-03-13 01:06:36.340682 | orchestrator | 2026-03-13 01:06:36 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED 2026-03-13 01:06:36.342929 | orchestrator | 2026-03-13 01:06:36 | INFO  | Task bcd8c5af-ddf8-4abd-abe0-4cc27c79d7d7 is in state STARTED 2026-03-13 01:06:36.345117 | orchestrator | 2026-03-13 01:06:36 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state STARTED 2026-03-13 01:06:36.345166 | orchestrator | 2026-03-13 01:06:36 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:06:39.390349 | orchestrator | 2026-03-13 01:06:39 | INFO  | Task c4285381-6e9a-4f3a-9d44-8287f6045e4f is in state STARTED 2026-03-13 01:06:39.393123 | orchestrator | 2026-03-13 01:06:39 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED 2026-03-13 01:06:39.395205 | orchestrator | 2026-03-13 01:06:39 | INFO  | 
Task bcd8c5af-ddf8-4abd-abe0-4cc27c79d7d7 is in state SUCCESS 2026-03-13 01:06:39.396743 | orchestrator | 2026-03-13 01:06:39.396786 | orchestrator | 2026-03-13 01:06:39.396794 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-13 01:06:39.396801 | orchestrator | 2026-03-13 01:06:39.396807 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-13 01:06:39.396814 | orchestrator | Friday 13 March 2026 01:03:51 +0000 (0:00:00.194) 0:00:00.194 ********** 2026-03-13 01:06:39.396820 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:06:39.396835 | orchestrator | ok: [testbed-node-1] 2026-03-13 01:06:39.396842 | orchestrator | ok: [testbed-node-2] 2026-03-13 01:06:39.396848 | orchestrator | 2026-03-13 01:06:39.396854 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-13 01:06:39.396876 | orchestrator | Friday 13 March 2026 01:03:51 +0000 (0:00:00.232) 0:00:00.426 ********** 2026-03-13 01:06:39.396883 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-03-13 01:06:39.396890 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-03-13 01:06:39.396896 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-03-13 01:06:39.396902 | orchestrator | 2026-03-13 01:06:39.396909 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-03-13 01:06:39.396915 | orchestrator | 2026-03-13 01:06:39.396922 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-13 01:06:39.396927 | orchestrator | Friday 13 March 2026 01:03:51 +0000 (0:00:00.386) 0:00:00.813 ********** 2026-03-13 01:06:39.396930 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 01:06:39.396935 | orchestrator | 2026-03-13 01:06:39.396938 | 
orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-03-13 01:06:39.396943 | orchestrator | Friday 13 March 2026 01:03:52 +0000 (0:00:00.463) 0:00:01.277 ********** 2026-03-13 01:06:39.396947 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-03-13 01:06:39.396951 | orchestrator | 2026-03-13 01:06:39.396955 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-03-13 01:06:39.397073 | orchestrator | Friday 13 March 2026 01:03:55 +0000 (0:00:03.529) 0:00:04.806 ********** 2026-03-13 01:06:39.397085 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-03-13 01:06:39.397090 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-03-13 01:06:39.397094 | orchestrator | 2026-03-13 01:06:39.397098 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-03-13 01:06:39.397102 | orchestrator | Friday 13 March 2026 01:04:01 +0000 (0:00:05.924) 0:00:10.731 ********** 2026-03-13 01:06:39.397106 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-13 01:06:39.397110 | orchestrator | 2026-03-13 01:06:39.397114 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-03-13 01:06:39.397146 | orchestrator | Friday 13 March 2026 01:04:04 +0000 (0:00:02.898) 0:00:13.630 ********** 2026-03-13 01:06:39.397152 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-03-13 01:06:39.397156 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-13 01:06:39.397159 | orchestrator | 2026-03-13 01:06:39.397163 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-03-13 01:06:39.397167 | orchestrator | Friday 13 March 2026 01:04:07 +0000 (0:00:03.361) 0:00:16.992 
********** 2026-03-13 01:06:39.397171 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-13 01:06:39.397175 | orchestrator | 2026-03-13 01:06:39.397179 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-03-13 01:06:39.397182 | orchestrator | Friday 13 March 2026 01:04:10 +0000 (0:00:03.010) 0:00:20.002 ********** 2026-03-13 01:06:39.397186 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-03-13 01:06:39.397190 | orchestrator | 2026-03-13 01:06:39.397194 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-03-13 01:06:39.397197 | orchestrator | Friday 13 March 2026 01:04:14 +0000 (0:00:03.241) 0:00:23.244 ********** 2026-03-13 01:06:39.397219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', 
'']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-13 01:06:39.397232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-13 01:06:39.397239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-13 01:06:39.397246 | orchestrator | 2026-03-13 01:06:39.397250 | 
orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-13 01:06:39.397254 | orchestrator | Friday 13 March 2026 01:04:19 +0000 (0:00:05.484) 0:00:28.729 ********** 2026-03-13 01:06:39.397259 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 01:06:39.397263 | orchestrator | 2026-03-13 01:06:39.397266 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-03-13 01:06:39.397274 | orchestrator | Friday 13 March 2026 01:04:20 +0000 (0:00:00.703) 0:00:29.432 ********** 2026-03-13 01:06:39.397278 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:06:39.397292 | orchestrator | changed: [testbed-node-1] 2026-03-13 01:06:39.397300 | orchestrator | changed: [testbed-node-2] 2026-03-13 01:06:39.397304 | orchestrator | 2026-03-13 01:06:39.397308 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-03-13 01:06:39.397314 | orchestrator | Friday 13 March 2026 01:04:24 +0000 (0:00:04.406) 0:00:33.838 ********** 2026-03-13 01:06:39.397321 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-13 01:06:39.397327 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-13 01:06:39.397335 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-13 01:06:39.397344 | orchestrator | 2026-03-13 01:06:39.397350 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-03-13 01:06:39.397355 | orchestrator | Friday 13 March 2026 01:04:26 +0000 (0:00:01.973) 0:00:35.811 ********** 2026-03-13 01:06:39.397361 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 
'ceph', 'enabled': True}) 2026-03-13 01:06:39.397367 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-13 01:06:39.397373 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-13 01:06:39.397379 | orchestrator | 2026-03-13 01:06:39.397385 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-03-13 01:06:39.397390 | orchestrator | Friday 13 March 2026 01:04:28 +0000 (0:00:01.237) 0:00:37.049 ********** 2026-03-13 01:06:39.397396 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:06:39.397402 | orchestrator | ok: [testbed-node-1] 2026-03-13 01:06:39.397409 | orchestrator | ok: [testbed-node-2] 2026-03-13 01:06:39.397415 | orchestrator | 2026-03-13 01:06:39.397421 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-03-13 01:06:39.397428 | orchestrator | Friday 13 March 2026 01:04:28 +0000 (0:00:00.958) 0:00:38.007 ********** 2026-03-13 01:06:39.397434 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:06:39.397440 | orchestrator | 2026-03-13 01:06:39.397446 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-03-13 01:06:39.397453 | orchestrator | Friday 13 March 2026 01:04:29 +0000 (0:00:00.091) 0:00:38.098 ********** 2026-03-13 01:06:39.397459 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:06:39.397465 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:06:39.397471 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:06:39.397475 | orchestrator | 2026-03-13 01:06:39.397479 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-13 01:06:39.397482 | orchestrator | Friday 13 March 2026 01:04:29 +0000 (0:00:00.247) 0:00:38.346 ********** 2026-03-13 01:06:39.397490 | orchestrator | included: 
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 01:06:39.397494 | orchestrator | 2026-03-13 01:06:39.397498 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-03-13 01:06:39.397502 | orchestrator | Friday 13 March 2026 01:04:30 +0000 (0:00:01.050) 0:00:39.396 ********** 2026-03-13 01:06:39.397509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 
2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-13 01:06:39.397519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-13 01:06:39.397524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 
'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-13 01:06:39.397531 | orchestrator | 2026-03-13 01:06:39.397535 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-03-13 01:06:39.397538 | orchestrator | Friday 13 March 2026 01:04:35 +0000 (0:00:04.931) 0:00:44.328 ********** 2026-03-13 01:06:39.397548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': 
True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-13 01:06:39.397553 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:06:39.397557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': 
True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-13 01:06:39.397564 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:06:39.397573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 
'', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-13 01:06:39.397577 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:06:39.397581 | orchestrator | 2026-03-13 01:06:39.397585 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-13 01:06:39.397589 | orchestrator | Friday 13 March 2026 01:04:39 +0000 (0:00:04.343) 0:00:48.672 ********** 2026-03-13 01:06:39.397593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-13 01:06:39.397599 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:06:39.397605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-13 01:06:39.397609 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:06:39.397617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-13 01:06:39.397626 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:06:39.397630 | orchestrator | 2026-03-13 01:06:39.397634 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-03-13 01:06:39.397637 | orchestrator | Friday 13 March 2026 01:04:43 +0000 (0:00:03.714) 0:00:52.386 ********** 2026-03-13 01:06:39.397641 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:06:39.397645 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:06:39.397649 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:06:39.397652 | orchestrator | 2026-03-13 01:06:39.397656 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-03-13 01:06:39.397660 | orchestrator | Friday 13 March 2026 01:04:46 +0000 (0:00:03.562) 0:00:55.949 ********** 2026-03-13 01:06:39.397665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-13 01:06:39.397674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-13 01:06:39.397681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-13 01:06:39.397685 | orchestrator | 2026-03-13 01:06:39.397736 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-03-13 01:06:39.397742 | orchestrator | Friday 13 March 2026 01:04:51 +0000 (0:00:04.674) 0:01:00.624 ********** 2026-03-13 01:06:39.397745 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:06:39.397749 | orchestrator | changed: [testbed-node-2] 2026-03-13 01:06:39.397753 | orchestrator | changed: [testbed-node-1] 2026-03-13 01:06:39.397757 | orchestrator | 2026-03-13 01:06:39.397760 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-03-13 01:06:39.397764 | orchestrator | Friday 13 March 2026 01:05:02 +0000 (0:00:11.006) 0:01:11.630 ********** 2026-03-13 01:06:39.397768 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:06:39.397771 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:06:39.397775 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:06:39.397779 | orchestrator | 2026-03-13 01:06:39.397785 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 
2026-03-13 01:06:39.397789 | orchestrator | Friday 13 March 2026 01:05:06 +0000 (0:00:04.373) 0:01:16.003 ********** 2026-03-13 01:06:39.397793 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:06:39.397798 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:06:39.397802 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:06:39.397807 | orchestrator | 2026-03-13 01:06:39.397811 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-03-13 01:06:39.397816 | orchestrator | Friday 13 March 2026 01:05:11 +0000 (0:00:04.221) 0:01:20.224 ********** 2026-03-13 01:06:39.397820 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:06:39.397827 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:06:39.397831 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:06:39.397836 | orchestrator | 2026-03-13 01:06:39.397840 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-03-13 01:06:39.397845 | orchestrator | Friday 13 March 2026 01:05:14 +0000 (0:00:03.459) 0:01:23.684 ********** 2026-03-13 01:06:39.397852 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:06:39.397856 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:06:39.397861 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:06:39.397865 | orchestrator | 2026-03-13 01:06:39.397870 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-03-13 01:06:39.397874 | orchestrator | Friday 13 March 2026 01:05:18 +0000 (0:00:03.445) 0:01:27.129 ********** 2026-03-13 01:06:39.397879 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:06:39.397883 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:06:39.397887 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:06:39.397890 | orchestrator | 2026-03-13 01:06:39.397894 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 
2026-03-13 01:06:39.397898 | orchestrator | Friday 13 March 2026 01:05:18 +0000 (0:00:00.254) 0:01:27.384 ********** 2026-03-13 01:06:39.397902 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-13 01:06:39.397905 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:06:39.397909 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-13 01:06:39.397913 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:06:39.397917 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-13 01:06:39.397920 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:06:39.397924 | orchestrator | 2026-03-13 01:06:39.397928 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-03-13 01:06:39.397932 | orchestrator | Friday 13 March 2026 01:05:21 +0000 (0:00:02.991) 0:01:30.375 ********** 2026-03-13 01:06:39.397936 | orchestrator | changed: [testbed-node-2] 2026-03-13 01:06:39.397940 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:06:39.397944 | orchestrator | changed: [testbed-node-1] 2026-03-13 01:06:39.397947 | orchestrator | 2026-03-13 01:06:39.397951 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-03-13 01:06:39.397955 | orchestrator | Friday 13 March 2026 01:05:27 +0000 (0:00:05.835) 0:01:36.211 ********** 2026-03-13 01:06:39.397959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-13 01:06:39.397969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-13 01:06:39.397976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-13 01:06:39.397980 | orchestrator | 2026-03-13 01:06:39.397984 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-13 01:06:39.397988 | orchestrator | Friday 13 March 2026 01:05:30 +0000 (0:00:03.465) 0:01:39.676 ********** 2026-03-13 01:06:39.397992 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:06:39.397996 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:06:39.397999 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:06:39.398003 | orchestrator | 2026-03-13 01:06:39.398007 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-03-13 01:06:39.398011 | orchestrator | Friday 13 March 2026 01:05:30 +0000 (0:00:00.272) 0:01:39.948 ********** 2026-03-13 01:06:39.398038 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:06:39.398042 | orchestrator | 2026-03-13 01:06:39.398049 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-03-13 01:06:39.398053 | orchestrator | Friday 13 March 2026 01:05:32 +0000 (0:00:02.029) 0:01:41.978 ********** 2026-03-13 01:06:39.398056 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:06:39.398060 | orchestrator | 2026-03-13 01:06:39.398064 | orchestrator | TASK 
[glance : Enable log_bin_trust_function_creators function] **************** 2026-03-13 01:06:39.398068 | orchestrator | Friday 13 March 2026 01:05:35 +0000 (0:00:02.190) 0:01:44.169 ********** 2026-03-13 01:06:39.398072 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:06:39.398075 | orchestrator | 2026-03-13 01:06:39.398079 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-03-13 01:06:39.398085 | orchestrator | Friday 13 March 2026 01:05:37 +0000 (0:00:02.270) 0:01:46.439 ********** 2026-03-13 01:06:39.398091 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:06:39.398098 | orchestrator | 2026-03-13 01:06:39.398107 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-03-13 01:06:39.398114 | orchestrator | Friday 13 March 2026 01:06:03 +0000 (0:00:26.077) 0:02:12.517 ********** 2026-03-13 01:06:39.398120 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:06:39.398127 | orchestrator | 2026-03-13 01:06:39.398134 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-13 01:06:39.398140 | orchestrator | Friday 13 March 2026 01:06:05 +0000 (0:00:02.374) 0:02:14.892 ********** 2026-03-13 01:06:39.398146 | orchestrator | 2026-03-13 01:06:39.398157 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-13 01:06:39.398164 | orchestrator | Friday 13 March 2026 01:06:05 +0000 (0:00:00.057) 0:02:14.949 ********** 2026-03-13 01:06:39.398170 | orchestrator | 2026-03-13 01:06:39.398175 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-13 01:06:39.398182 | orchestrator | Friday 13 March 2026 01:06:06 +0000 (0:00:00.120) 0:02:15.070 ********** 2026-03-13 01:06:39.398188 | orchestrator | 2026-03-13 01:06:39.398194 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] 
************************ 2026-03-13 01:06:39.398200 | orchestrator | Friday 13 March 2026 01:06:06 +0000 (0:00:00.127) 0:02:15.198 ********** 2026-03-13 01:06:39.398207 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:06:39.398213 | orchestrator | changed: [testbed-node-1] 2026-03-13 01:06:39.398220 | orchestrator | changed: [testbed-node-2] 2026-03-13 01:06:39.398226 | orchestrator | 2026-03-13 01:06:39.398232 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 01:06:39.398239 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-13 01:06:39.398247 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-13 01:06:39.398253 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-13 01:06:39.398260 | orchestrator | 2026-03-13 01:06:39.398266 | orchestrator | 2026-03-13 01:06:39.398273 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 01:06:39.398279 | orchestrator | Friday 13 March 2026 01:06:37 +0000 (0:00:30.980) 0:02:46.179 ********** 2026-03-13 01:06:39.398286 | orchestrator | =============================================================================== 2026-03-13 01:06:39.398292 | orchestrator | glance : Restart glance-api container ---------------------------------- 30.98s 2026-03-13 01:06:39.398299 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 26.08s 2026-03-13 01:06:39.398306 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 11.01s 2026-03-13 01:06:39.398312 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 5.93s 2026-03-13 01:06:39.398320 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 5.84s 
2026-03-13 01:06:39.398334 | orchestrator | glance : Ensuring config directories exist ------------------------------ 5.48s 2026-03-13 01:06:39.398342 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.93s 2026-03-13 01:06:39.398349 | orchestrator | glance : Copying over config.json files for services -------------------- 4.67s 2026-03-13 01:06:39.398357 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.41s 2026-03-13 01:06:39.398364 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.37s 2026-03-13 01:06:39.398371 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 4.34s 2026-03-13 01:06:39.398379 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.22s 2026-03-13 01:06:39.398386 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.71s 2026-03-13 01:06:39.398394 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.56s 2026-03-13 01:06:39.398401 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.53s 2026-03-13 01:06:39.398408 | orchestrator | glance : Check glance containers ---------------------------------------- 3.46s 2026-03-13 01:06:39.398416 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.46s 2026-03-13 01:06:39.398423 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.45s 2026-03-13 01:06:39.398430 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.36s 2026-03-13 01:06:39.398437 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.24s 2026-03-13 01:06:39.398443 | orchestrator | 2026-03-13 01:06:39 | INFO  | Task 935638ce-73e8-44d1-b5ce-a7e03329ee45 is in state 
STARTED 2026-03-13 01:06:39.398958 | orchestrator | 2026-03-13 01:06:39 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state STARTED 2026-03-13 01:06:39.398984 | orchestrator | 2026-03-13 01:06:39 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:06:42.449242 | orchestrator | 2026-03-13 01:06:42 | INFO  | Task c4285381-6e9a-4f3a-9d44-8287f6045e4f is in state STARTED 2026-03-13 01:06:42.450682 | orchestrator | 2026-03-13 01:06:42 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED 2026-03-13 01:06:42.452116 | orchestrator | 2026-03-13 01:06:42 | INFO  | Task 935638ce-73e8-44d1-b5ce-a7e03329ee45 is in state STARTED 2026-03-13 01:06:42.453505 | orchestrator | 2026-03-13 01:06:42 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state STARTED 2026-03-13 01:06:42.453586 | orchestrator | 2026-03-13 01:06:42 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:06:45.502757 | orchestrator | 2026-03-13 01:06:45 | INFO  | Task c4285381-6e9a-4f3a-9d44-8287f6045e4f is in state STARTED 2026-03-13 01:06:45.504537 | orchestrator | 2026-03-13 01:06:45 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED 2026-03-13 01:06:45.506291 | orchestrator | 2026-03-13 01:06:45 | INFO  | Task 935638ce-73e8-44d1-b5ce-a7e03329ee45 is in state STARTED 2026-03-13 01:06:45.508054 | orchestrator | 2026-03-13 01:06:45 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state STARTED 2026-03-13 01:06:45.508409 | orchestrator | 2026-03-13 01:06:45 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:06:48.550177 | orchestrator | 2026-03-13 01:06:48 | INFO  | Task c4285381-6e9a-4f3a-9d44-8287f6045e4f is in state STARTED 2026-03-13 01:06:48.552597 | orchestrator | 2026-03-13 01:06:48 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED 2026-03-13 01:06:48.555513 | orchestrator | 2026-03-13 01:06:48 | INFO  | Task 935638ce-73e8-44d1-b5ce-a7e03329ee45 is in state STARTED 2026-03-13 
01:06:48.558587 | orchestrator | 2026-03-13 01:06:48 | INFO  | Task 0de40343-0f24-4115-875e-6b817ae7ffee is in state SUCCESS 2026-03-13 01:06:48.559930 | orchestrator | 2026-03-13 01:06:48.559972 | orchestrator | 2026-03-13 01:06:48.559981 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-13 01:06:48.559987 | orchestrator | 2026-03-13 01:06:48.559992 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-13 01:06:48.559997 | orchestrator | Friday 13 March 2026 01:03:53 +0000 (0:00:00.191) 0:00:00.191 ********** 2026-03-13 01:06:48.560003 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:06:48.560008 | orchestrator | ok: [testbed-node-1] 2026-03-13 01:06:48.560013 | orchestrator | ok: [testbed-node-2] 2026-03-13 01:06:48.560018 | orchestrator | 2026-03-13 01:06:48.560022 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-13 01:06:48.560028 | orchestrator | Friday 13 March 2026 01:03:53 +0000 (0:00:00.239) 0:00:00.430 ********** 2026-03-13 01:06:48.560034 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-03-13 01:06:48.560039 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-03-13 01:06:48.560045 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-03-13 01:06:48.560050 | orchestrator | 2026-03-13 01:06:48.560055 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-03-13 01:06:48.560061 | orchestrator | 2026-03-13 01:06:48.560064 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-13 01:06:48.560067 | orchestrator | Friday 13 March 2026 01:03:53 +0000 (0:00:00.367) 0:00:00.798 ********** 2026-03-13 01:06:48.560071 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 
01:06:48.560074 | orchestrator | 2026-03-13 01:06:48.560078 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-03-13 01:06:48.560081 | orchestrator | Friday 13 March 2026 01:03:54 +0000 (0:00:00.481) 0:00:01.280 ********** 2026-03-13 01:06:48.560084 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-03-13 01:06:48.560087 | orchestrator | 2026-03-13 01:06:48.560090 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-03-13 01:06:48.560093 | orchestrator | Friday 13 March 2026 01:03:57 +0000 (0:00:03.328) 0:00:04.608 ********** 2026-03-13 01:06:48.560097 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-03-13 01:06:48.560100 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-03-13 01:06:48.560103 | orchestrator | 2026-03-13 01:06:48.560106 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-03-13 01:06:48.560109 | orchestrator | Friday 13 March 2026 01:04:03 +0000 (0:00:06.089) 0:00:10.697 ********** 2026-03-13 01:06:48.560113 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-13 01:06:48.560116 | orchestrator | 2026-03-13 01:06:48.560119 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-03-13 01:06:48.560122 | orchestrator | Friday 13 March 2026 01:04:06 +0000 (0:00:02.906) 0:00:13.603 ********** 2026-03-13 01:06:48.560125 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-03-13 01:06:48.560128 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-13 01:06:48.560131 | orchestrator | 2026-03-13 01:06:48.560134 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 
2026-03-13 01:06:48.560137 | orchestrator | Friday 13 March 2026 01:04:10 +0000 (0:00:03.536) 0:00:17.140 ********** 2026-03-13 01:06:48.560140 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-13 01:06:48.560144 | orchestrator | 2026-03-13 01:06:48.560148 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-03-13 01:06:48.560153 | orchestrator | Friday 13 March 2026 01:04:13 +0000 (0:00:03.071) 0:00:20.211 ********** 2026-03-13 01:06:48.560157 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-03-13 01:06:48.560162 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-03-13 01:06:48.560179 | orchestrator | 2026-03-13 01:06:48.560192 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-03-13 01:06:48.560197 | orchestrator | Friday 13 March 2026 01:04:19 +0000 (0:00:06.426) 0:00:26.638 ********** 2026-03-13 01:06:48.560205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-13 01:06:48.560221 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-13 01:06:48.560228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-13 01:06:48.560234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.560280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.560292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.560296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.560303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.560307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.560310 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.560313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.560321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.560364 | orchestrator | 2026-03-13 01:06:48.560370 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-13 01:06:48.560375 | orchestrator | Friday 13 March 2026 01:04:22 +0000 (0:00:02.765) 0:00:29.403 ********** 2026-03-13 01:06:48.560380 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:06:48.560385 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:06:48.560390 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:06:48.560395 | orchestrator | 2026-03-13 01:06:48.560399 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-13 01:06:48.560404 | orchestrator | Friday 13 March 2026 01:04:23 +0000 (0:00:00.505) 0:00:29.909 ********** 2026-03-13 01:06:48.560410 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 01:06:48.560415 | orchestrator | 2026-03-13 01:06:48.560419 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-03-13 01:06:48.560738 | orchestrator | Friday 13 March 2026 01:04:23 +0000 (0:00:00.840) 0:00:30.750 ********** 2026-03-13 01:06:48.560788 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-03-13 01:06:48.560795 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-03-13 01:06:48.560800 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-03-13 01:06:48.560805 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-03-13 01:06:48.560809 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-03-13 01:06:48.560813 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-03-13 01:06:48.560816 | orchestrator | 2026-03-13 01:06:48.560819 | orchestrator | TASK [cinder : 
Copying over multiple ceph.conf for cinder services] ************ 2026-03-13 01:06:48.560823 | orchestrator | Friday 13 March 2026 01:04:25 +0000 (0:00:01.837) 0:00:32.590 ********** 2026-03-13 01:06:48.560829 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-13 01:06:48.560836 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-13 01:06:48.560849 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-13 01:06:48.560853 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-13 01:06:48.560873 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-13 01:06:48.560884 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-13 01:06:48.560889 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-13 
01:06:48.560899 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-13 01:06:48.560906 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-13 01:06:48.560925 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-13 01:06:48.560931 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-13 01:06:48.560935 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-13 01:06:48.560943 | orchestrator | 2026-03-13 
01:06:48.560947 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-03-13 01:06:48.560953 | orchestrator | Friday 13 March 2026 01:04:29 +0000 (0:00:03.513) 0:00:36.104 ********** 2026-03-13 01:06:48.560959 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-13 01:06:48.560964 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-13 01:06:48.560969 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-13 01:06:48.560974 | orchestrator | 2026-03-13 01:06:48.560979 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-03-13 01:06:48.560984 | orchestrator | Friday 13 March 2026 01:04:31 +0000 (0:00:02.622) 0:00:38.727 ********** 2026-03-13 01:06:48.560989 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-03-13 01:06:48.560993 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-03-13 01:06:48.560998 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-03-13 01:06:48.561003 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-03-13 01:06:48.561008 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-03-13 01:06:48.561013 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-03-13 01:06:48.561018 | orchestrator | 2026-03-13 01:06:48.561026 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-03-13 01:06:48.561031 | orchestrator | Friday 13 March 2026 01:04:34 +0000 (0:00:02.822) 0:00:41.549 ********** 2026-03-13 01:06:48.561035 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-03-13 01:06:48.561038 | orchestrator 
| ok: [testbed-node-1] => (item=cinder-volume) 2026-03-13 01:06:48.561041 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-03-13 01:06:48.561044 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-03-13 01:06:48.561048 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-03-13 01:06:48.561053 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-03-13 01:06:48.561060 | orchestrator | 2026-03-13 01:06:48.561067 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-03-13 01:06:48.561072 | orchestrator | Friday 13 March 2026 01:04:35 +0000 (0:00:01.133) 0:00:42.683 ********** 2026-03-13 01:06:48.561076 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:06:48.561081 | orchestrator | 2026-03-13 01:06:48.561086 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-03-13 01:06:48.561091 | orchestrator | Friday 13 March 2026 01:04:36 +0000 (0:00:00.327) 0:00:43.010 ********** 2026-03-13 01:06:48.561096 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:06:48.561101 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:06:48.561106 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:06:48.561111 | orchestrator | 2026-03-13 01:06:48.561116 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-13 01:06:48.561121 | orchestrator | Friday 13 March 2026 01:04:36 +0000 (0:00:00.418) 0:00:43.429 ********** 2026-03-13 01:06:48.561127 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 01:06:48.561132 | orchestrator | 2026-03-13 01:06:48.561137 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-03-13 01:06:48.561159 | orchestrator | Friday 13 March 2026 01:04:37 +0000 (0:00:01.082) 0:00:44.511 ********** 2026-03-13 
01:06:48.561166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-13 01:06:48.561178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-13 01:06:48.561184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-13 01:06:48.561197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.561203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-13 
01:06:48.561213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.561223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.561229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.561235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.561243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.561249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.561262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.561268 | orchestrator | 2026-03-13 01:06:48.561272 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-13 01:06:48.561277 | orchestrator | Friday 13 March 2026 01:04:41 +0000 (0:00:04.046) 0:00:48.558 ********** 2026-03-13 01:06:48.561283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-13 01:06:48.561288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-13 01:06:48.561297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-13 01:06:48.561302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-13 01:06:48.561308 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:06:48.561318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-13 01:06:48.561328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-13 
01:06:48.561335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-13 01:06:48.561340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-13 01:06:48.561348 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:06:48.561357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-13 01:06:48.561363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-13 01:06:48.561377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-13 01:06:48.561383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-13 01:06:48.561389 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:06:48.561394 | orchestrator | 2026-03-13 01:06:48.561400 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-03-13 01:06:48.561405 | orchestrator | Friday 13 March 2026 01:04:42 +0000 (0:00:00.719) 0:00:49.277 ********** 2026-03-13 01:06:48.561411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-13 01:06:48.561417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-13 01:06:48.561422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-13 01:06:48.561435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-13 01:06:48.561440 | 
orchestrator | skipping: [testbed-node-0] 2026-03-13 01:06:48.561445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-13 01:06:48.561473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-13 01:06:48.561479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-13 01:06:48.561487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-13 01:06:48.561496 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:06:48.561502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-13 01:06:48.561511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-13 01:06:48.561517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-13 01:06:48.561523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-13 01:06:48.561528 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:06:48.561533 | orchestrator | 2026-03-13 01:06:48.561539 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-03-13 01:06:48.561545 | orchestrator | Friday 13 March 2026 01:04:43 +0000 (0:00:01.024) 0:00:50.302 ********** 2026-03-13 01:06:48.561552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-13 01:06:48.561562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-13 01:06:48.561572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-13 01:06:48.561577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.561583 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.561589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.561596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.561607 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.561615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.561621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.561627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.561633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.561641 | orchestrator | 2026-03-13 01:06:48.561647 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-03-13 01:06:48.561653 | orchestrator | Friday 13 March 2026 01:04:47 +0000 (0:00:04.405) 0:00:54.708 ********** 2026-03-13 01:06:48.561660 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-13 01:06:48.561666 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-13 01:06:48.561671 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-13 01:06:48.561740 | orchestrator | 2026-03-13 01:06:48.561744 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-03-13 01:06:48.561748 | orchestrator | Friday 13 March 2026 01:04:50 +0000 (0:00:02.305) 0:00:57.014 ********** 2026-03-13 01:06:48.561757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-13 01:06:48.561763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-13 01:06:48.561769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-13 01:06:48.561774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-13 
01:06:48.561786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.561792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.561801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-13 
01:06:48.561807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.561813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.561819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.561832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.561838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.561844 | orchestrator | 2026-03-13 01:06:48.561847 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-03-13 01:06:48.561850 | orchestrator | Friday 13 March 2026 01:05:06 +0000 (0:00:16.775) 0:01:13.789 ********** 2026-03-13 01:06:48.561854 | orchestrator | 
changed: [testbed-node-1] 2026-03-13 01:06:48.561857 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:06:48.561860 | orchestrator | changed: [testbed-node-2] 2026-03-13 01:06:48.561863 | orchestrator | 2026-03-13 01:06:48.561866 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-03-13 01:06:48.561871 | orchestrator | Friday 13 March 2026 01:05:08 +0000 (0:00:01.714) 0:01:15.503 ********** 2026-03-13 01:06:48.561875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-13 01:06:48.561878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  
2026-03-13 01:06:48.561883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-13 01:06:48.561889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-13 01:06:48.561892 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:06:48.561895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-13 01:06:48.561902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-13 01:06:48.561905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-13 01:06:48.561908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-13 01:06:48.561914 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:06:48.561917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-13 01:06:48.561922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-13 01:06:48.561925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-13 01:06:48.561931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-13 01:06:48.561934 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:06:48.561937 | orchestrator | 2026-03-13 01:06:48.561940 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-03-13 01:06:48.561943 | orchestrator | Friday 13 March 2026 01:05:09 +0000 (0:00:00.923) 
0:01:16.427 ********** 2026-03-13 01:06:48.561946 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:06:48.561950 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:06:48.561953 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:06:48.561958 | orchestrator | 2026-03-13 01:06:48.561963 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-03-13 01:06:48.561968 | orchestrator | Friday 13 March 2026 01:05:09 +0000 (0:00:00.352) 0:01:16.780 ********** 2026-03-13 01:06:48.561973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-13 01:06:48.561980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-13 01:06:48.561985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-13 01:06:48.561993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.561998 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.562005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.562011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.562051 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.562057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.562065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.562072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.562080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-13 01:06:48.562083 | orchestrator | 2026-03-13 01:06:48.562086 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-13 01:06:48.562089 | orchestrator | Friday 13 March 2026 01:05:13 +0000 (0:00:03.810) 0:01:20.590 ********** 2026-03-13 01:06:48.562092 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:06:48.562096 | orchestrator | skipping: 
[testbed-node-1] 2026-03-13 01:06:48.562111 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:06:48.562114 | orchestrator | 2026-03-13 01:06:48.562117 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-03-13 01:06:48.562120 | orchestrator | Friday 13 March 2026 01:05:14 +0000 (0:00:00.411) 0:01:21.001 ********** 2026-03-13 01:06:48.562123 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:06:48.562126 | orchestrator | 2026-03-13 01:06:48.562130 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-03-13 01:06:48.562133 | orchestrator | Friday 13 March 2026 01:05:16 +0000 (0:00:02.223) 0:01:23.225 ********** 2026-03-13 01:06:48.562136 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:06:48.562139 | orchestrator | 2026-03-13 01:06:48.562142 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-03-13 01:06:48.562145 | orchestrator | Friday 13 March 2026 01:05:18 +0000 (0:00:02.039) 0:01:25.265 ********** 2026-03-13 01:06:48.562148 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:06:48.562151 | orchestrator | 2026-03-13 01:06:48.562154 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-13 01:06:48.562157 | orchestrator | Friday 13 March 2026 01:05:36 +0000 (0:00:18.577) 0:01:43.843 ********** 2026-03-13 01:06:48.562160 | orchestrator | 2026-03-13 01:06:48.562163 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-13 01:06:48.562166 | orchestrator | Friday 13 March 2026 01:05:37 +0000 (0:00:00.063) 0:01:43.907 ********** 2026-03-13 01:06:48.562169 | orchestrator | 2026-03-13 01:06:48.562172 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-13 01:06:48.562178 | orchestrator | Friday 13 March 2026 01:05:37 +0000 (0:00:00.060) 
0:01:43.968 ********** 2026-03-13 01:06:48.562181 | orchestrator | 2026-03-13 01:06:48.562184 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-03-13 01:06:48.562187 | orchestrator | Friday 13 March 2026 01:05:37 +0000 (0:00:00.062) 0:01:44.030 ********** 2026-03-13 01:06:48.562190 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:06:48.562193 | orchestrator | changed: [testbed-node-2] 2026-03-13 01:06:48.562196 | orchestrator | changed: [testbed-node-1] 2026-03-13 01:06:48.562200 | orchestrator | 2026-03-13 01:06:48.562203 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-03-13 01:06:48.562206 | orchestrator | Friday 13 March 2026 01:06:04 +0000 (0:00:27.209) 0:02:11.240 ********** 2026-03-13 01:06:48.562209 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:06:48.562212 | orchestrator | changed: [testbed-node-2] 2026-03-13 01:06:48.562215 | orchestrator | changed: [testbed-node-1] 2026-03-13 01:06:48.562218 | orchestrator | 2026-03-13 01:06:48.562223 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-03-13 01:06:48.562227 | orchestrator | Friday 13 March 2026 01:06:13 +0000 (0:00:09.387) 0:02:20.627 ********** 2026-03-13 01:06:48.562230 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:06:48.562233 | orchestrator | changed: [testbed-node-1] 2026-03-13 01:06:48.562236 | orchestrator | changed: [testbed-node-2] 2026-03-13 01:06:48.562239 | orchestrator | 2026-03-13 01:06:48.562242 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-03-13 01:06:48.562245 | orchestrator | Friday 13 March 2026 01:06:37 +0000 (0:00:23.277) 0:02:43.905 ********** 2026-03-13 01:06:48.562248 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:06:48.562251 | orchestrator | changed: [testbed-node-2] 2026-03-13 01:06:48.562254 | orchestrator | changed: 
[testbed-node-1] 2026-03-13 01:06:48.562257 | orchestrator | 2026-03-13 01:06:48.562260 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-03-13 01:06:48.562265 | orchestrator | Friday 13 March 2026 01:06:47 +0000 (0:00:10.392) 0:02:54.298 ********** 2026-03-13 01:06:48.562268 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:06:48.562272 | orchestrator | 2026-03-13 01:06:48.562275 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 01:06:48.562280 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-13 01:06:48.562285 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-13 01:06:48.562290 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-13 01:06:48.562306 | orchestrator | 2026-03-13 01:06:48.562310 | orchestrator | 2026-03-13 01:06:48.562314 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 01:06:48.562319 | orchestrator | Friday 13 March 2026 01:06:47 +0000 (0:00:00.237) 0:02:54.535 ********** 2026-03-13 01:06:48.562323 | orchestrator | =============================================================================== 2026-03-13 01:06:48.562328 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 27.21s 2026-03-13 01:06:48.562333 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 23.28s 2026-03-13 01:06:48.562337 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 18.58s 2026-03-13 01:06:48.562341 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 16.78s 2026-03-13 01:06:48.562346 | orchestrator | cinder : Restart cinder-backup container 
------------------------------- 10.39s 2026-03-13 01:06:48.562350 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 9.39s 2026-03-13 01:06:48.562355 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 6.43s 2026-03-13 01:06:48.562361 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.09s 2026-03-13 01:06:48.562366 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.41s 2026-03-13 01:06:48.562370 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.05s 2026-03-13 01:06:48.562375 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.81s 2026-03-13 01:06:48.562379 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.54s 2026-03-13 01:06:48.562383 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.51s 2026-03-13 01:06:48.562388 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.33s 2026-03-13 01:06:48.562393 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.07s 2026-03-13 01:06:48.562397 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 2.91s 2026-03-13 01:06:48.562402 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.82s 2026-03-13 01:06:48.562411 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.77s 2026-03-13 01:06:48.562417 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 2.62s 2026-03-13 01:06:48.562422 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.31s 2026-03-13 01:06:48.562427 | orchestrator | 2026-03-13 01:06:48 | INFO  | Wait 1 second(s) until the 
next check 2026-03-13 01:06:51.601127 | orchestrator | 2026-03-13 01:06:51 | INFO  | Task c4285381-6e9a-4f3a-9d44-8287f6045e4f is in state STARTED 2026-03-13 01:06:51.603048 | orchestrator | 2026-03-13 01:06:51 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED 2026-03-13 01:06:51.604950 | orchestrator | 2026-03-13 01:06:51 | INFO  | Task 935638ce-73e8-44d1-b5ce-a7e03329ee45 is in state STARTED 2026-03-13 01:06:51.605055 | orchestrator | 2026-03-13 01:06:51 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:06:54.643686 | orchestrator | 2026-03-13 01:06:54 | INFO  | Task c4285381-6e9a-4f3a-9d44-8287f6045e4f is in state STARTED 2026-03-13 01:06:54.643732 | orchestrator | 2026-03-13 01:06:54 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED 2026-03-13 01:06:54.643737 | orchestrator | 2026-03-13 01:06:54 | INFO  | Task 935638ce-73e8-44d1-b5ce-a7e03329ee45 is in state STARTED 2026-03-13 01:06:54.643740 | orchestrator | 2026-03-13 01:06:54 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:06:57.683329 | orchestrator | 2026-03-13 01:06:57 | INFO  | Task c4285381-6e9a-4f3a-9d44-8287f6045e4f is in state STARTED 2026-03-13 01:06:57.685464 | orchestrator | 2026-03-13 01:06:57 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED 2026-03-13 01:06:57.688178 | orchestrator | 2026-03-13 01:06:57 | INFO  | Task 935638ce-73e8-44d1-b5ce-a7e03329ee45 is in state STARTED 2026-03-13 01:06:57.688224 | orchestrator | 2026-03-13 01:06:57 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:07:00.733122 | orchestrator | 2026-03-13 01:07:00 | INFO  | Task c4285381-6e9a-4f3a-9d44-8287f6045e4f is in state STARTED 2026-03-13 01:07:00.735212 | orchestrator | 2026-03-13 01:07:00 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED 2026-03-13 01:07:00.736605 | orchestrator | 2026-03-13 01:07:00 | INFO  | Task 935638ce-73e8-44d1-b5ce-a7e03329ee45 is in state STARTED 2026-03-13 
01:07:00.736878 | orchestrator | 2026-03-13 01:07:00 | INFO  | Wait 1 second(s) until the next check [... identical polling output repeated roughly every 3 seconds from 01:07:03 to 01:08:20: tasks c4285381-6e9a-4f3a-9d44-8287f6045e4f, bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea and 935638ce-73e8-44d1-b5ce-a7e03329ee45 remained in state STARTED ...] 2026-03-13 01:08:23.021970 | orchestrator | 2026-03-13 01:08:23 | INFO  | Task c4285381-6e9a-4f3a-9d44-8287f6045e4f is in state STARTED 2026-03-13 01:08:23.022099 | orchestrator | 2026-03-13 01:08:23 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED 2026-03-13 01:08:23.023887 | orchestrator | 2026-03-13 01:08:23 |
INFO  | Task 935638ce-73e8-44d1-b5ce-a7e03329ee45 is in state STARTED
2026-03-13 01:08:23.023962 | orchestrator | 2026-03-13 01:08:23 | INFO  | Wait 1 second(s) until the next check
2026-03-13 01:08:26.059124 | orchestrator | 2026-03-13 01:08:26 | INFO  | Task c4285381-6e9a-4f3a-9d44-8287f6045e4f is in state STARTED
2026-03-13 01:08:26.060285 | orchestrator | 2026-03-13 01:08:26 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED
2026-03-13 01:08:26.061791 | orchestrator | 2026-03-13 01:08:26 | INFO  | Task 935638ce-73e8-44d1-b5ce-a7e03329ee45 is in state STARTED
2026-03-13 01:08:26.061824 | orchestrator | 2026-03-13 01:08:26 | INFO  | Wait 1 second(s) until the next check
2026-03-13 01:08:29.107217 | orchestrator | 2026-03-13 01:08:29 | INFO  | Task c4285381-6e9a-4f3a-9d44-8287f6045e4f is in state STARTED
2026-03-13 01:08:29.109656 | orchestrator | 2026-03-13 01:08:29 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED
2026-03-13 01:08:29.112456 | orchestrator | 2026-03-13 01:08:29 | INFO  | Task 935638ce-73e8-44d1-b5ce-a7e03329ee45 is in state STARTED
2026-03-13 01:08:29.112566 | orchestrator | 2026-03-13 01:08:29 | INFO  | Wait 1 second(s) until the next check
2026-03-13 01:08:32.151401 | orchestrator | 2026-03-13 01:08:32 | INFO  | Task c4285381-6e9a-4f3a-9d44-8287f6045e4f is in state SUCCESS
2026-03-13 01:08:32.152392 | orchestrator | 2026-03-13 01:08:32 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED
2026-03-13 01:08:32.153833 | orchestrator | 2026-03-13 01:08:32 | INFO  | Task 935638ce-73e8-44d1-b5ce-a7e03329ee45 is in state STARTED
2026-03-13 01:08:32.154286 | orchestrator | 2026-03-13 01:08:32 | INFO  | Task 6685e78e-c366-430c-99eb-c698f723c8a2 is in state STARTED
2026-03-13 01:08:32.155942 | orchestrator | 2026-03-13 01:08:32 | INFO  | Wait 1 second(s) until the next check
2026-03-13 01:08:35.201119 | orchestrator | 2026-03-13 01:08:35 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED
2026-03-13 01:08:35.201409 | orchestrator | 2026-03-13 01:08:35 | INFO  | Task 935638ce-73e8-44d1-b5ce-a7e03329ee45 is in state STARTED
2026-03-13 01:08:35.205066 | orchestrator | 2026-03-13 01:08:35 | INFO  | Task 6685e78e-c366-430c-99eb-c698f723c8a2 is in state STARTED
2026-03-13 01:08:35.205120 | orchestrator | 2026-03-13 01:08:35 | INFO  | Wait 1 second(s) until the next check
2026-03-13 01:08:38.252461 | orchestrator | 2026-03-13 01:08:38 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED
2026-03-13 01:08:38.257522 | orchestrator | 2026-03-13 01:08:38 | INFO  | Task 935638ce-73e8-44d1-b5ce-a7e03329ee45 is in state SUCCESS
2026-03-13 01:08:38.260200 | orchestrator |
2026-03-13 01:08:38.260245 | orchestrator |
2026-03-13 01:08:38.260251 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-13 01:08:38.260255 | orchestrator |
2026-03-13 01:08:38.260259 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-13 01:08:38.260263 | orchestrator | Friday 13 March 2026 01:06:05 +0000 (0:00:00.210) 0:00:00.210 **********
2026-03-13 01:08:38.260267 | orchestrator | ok: [testbed-node-0]
2026-03-13 01:08:38.260271 | orchestrator | ok: [testbed-node-1]
2026-03-13 01:08:38.260275 | orchestrator | ok: [testbed-node-2]
2026-03-13 01:08:38.260278 | orchestrator |
2026-03-13 01:08:38.260282 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-13 01:08:38.260286 | orchestrator | Friday 13 March 2026 01:06:05 +0000 (0:00:00.285) 0:00:00.495 **********
2026-03-13 01:08:38.260290 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-03-13 01:08:38.260294 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-03-13 01:08:38.260298 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-03-13 01:08:38.260301 | orchestrator |
2026-03-13 01:08:38.260305 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-03-13 01:08:38.260309 | orchestrator |
2026-03-13 01:08:38.260312 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-03-13 01:08:38.260316 | orchestrator | Friday 13 March 2026 01:06:06 +0000 (0:00:00.701) 0:00:01.197 **********
2026-03-13 01:08:38.260319 | orchestrator | ok: [testbed-node-0]
2026-03-13 01:08:38.260323 | orchestrator | ok: [testbed-node-1]
2026-03-13 01:08:38.260327 | orchestrator | ok: [testbed-node-2]
2026-03-13 01:08:38.260330 | orchestrator |
2026-03-13 01:08:38.260334 | orchestrator | PLAY RECAP *********************************************************************
2026-03-13 01:08:38.260338 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 01:08:38.260342 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 01:08:38.260346 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 01:08:38.260350 | orchestrator |
2026-03-13 01:08:38.260354 | orchestrator |
2026-03-13 01:08:38.260357 | orchestrator | TASKS RECAP ********************************************************************
2026-03-13 01:08:38.260361 | orchestrator | Friday 13 March 2026 01:08:30 +0000 (0:02:23.831) 0:02:25.029 **********
2026-03-13 01:08:38.260365 | orchestrator | ===============================================================================
2026-03-13 01:08:38.260368 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 143.83s
2026-03-13 01:08:38.260372 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.70s
2026-03-13 01:08:38.260376 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s
2026-03-13 01:08:38.260379 |
orchestrator |
2026-03-13 01:08:38.260383 | orchestrator |
2026-03-13 01:08:38.260386 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-13 01:08:38.260390 | orchestrator |
2026-03-13 01:08:38.260394 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-13 01:08:38.260397 | orchestrator | Friday 13 March 2026 01:06:42 +0000 (0:00:00.249) 0:00:00.249 **********
2026-03-13 01:08:38.260400 | orchestrator | ok: [testbed-node-0]
2026-03-13 01:08:38.260403 | orchestrator | ok: [testbed-node-1]
2026-03-13 01:08:38.260418 | orchestrator | ok: [testbed-node-2]
2026-03-13 01:08:38.260421 | orchestrator |
2026-03-13 01:08:38.260424 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-13 01:08:38.260427 | orchestrator | Friday 13 March 2026 01:06:42 +0000 (0:00:00.297) 0:00:00.547 **********
2026-03-13 01:08:38.260430 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-03-13 01:08:38.260434 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-03-13 01:08:38.260437 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-03-13 01:08:38.260440 | orchestrator |
2026-03-13 01:08:38.260443 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-03-13 01:08:38.260446 | orchestrator |
2026-03-13 01:08:38.260449 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-03-13 01:08:38.260452 | orchestrator | Friday 13 March 2026 01:06:42 +0000 (0:00:00.419) 0:00:00.966 **********
2026-03-13 01:08:38.260455 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 01:08:38.260458 | orchestrator |
2026-03-13 01:08:38.260461 | orchestrator | TASK [grafana : Ensuring config directories exist]
***************************** 2026-03-13 01:08:38.260464 | orchestrator | Friday 13 March 2026 01:06:43 +0000 (0:00:00.519) 0:00:01.486 ********** 2026-03-13 01:08:38.260469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-13 01:08:38.260481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-13 01:08:38.260495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-13 01:08:38.260501 | orchestrator | 2026-03-13 01:08:38.260507 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-03-13 01:08:38.260511 | orchestrator | Friday 13 March 2026 01:06:43 +0000 (0:00:00.709) 0:00:02.195 ********** 2026-03-13 01:08:38.260516 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-03-13 01:08:38.260522 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-03-13 01:08:38.260527 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-13 01:08:38.260532 | orchestrator | 2026-03-13 01:08:38.260537 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-13 01:08:38.260546 | orchestrator | Friday 13 March 2026 01:06:44 +0000 (0:00:00.887) 0:00:03.083 ********** 2026-03-13 01:08:38.260551 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 01:08:38.260556 | orchestrator | 2026-03-13 01:08:38.260561 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-03-13 01:08:38.260565 | orchestrator | Friday 13 March 2026 01:06:45 +0000 (0:00:00.734) 0:00:03.817 ********** 2026-03-13 01:08:38.260568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-13 01:08:38.260572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-13 01:08:38.260575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-13 01:08:38.260578 | orchestrator | 2026-03-13 01:08:38.260581 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-03-13 01:08:38.260587 | orchestrator | Friday 13 March 2026 01:06:46 +0000 (0:00:01.342) 0:00:05.160 ********** 2026-03-13 01:08:38.260590 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-13 01:08:38.260594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-13 01:08:38.260600 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:08:38.260603 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:08:38.260606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-13 01:08:38.260609 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:08:38.260613 | orchestrator | 2026-03-13 01:08:38.260618 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-03-13 01:08:38.260623 | orchestrator | Friday 13 March 2026 01:06:47 +0000 (0:00:00.376) 0:00:05.537 ********** 2026-03-13 01:08:38.260626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-13 01:08:38.260630 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:08:38.260633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-13 01:08:38.260636 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:08:38.260642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-13 01:08:38.260646 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:08:38.260649 | orchestrator | 2026-03-13 01:08:38.260652 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-03-13 01:08:38.260655 | orchestrator | Friday 13 March 2026 01:06:47 +0000 (0:00:00.681) 0:00:06.219 ********** 2026-03-13 01:08:38.260658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-13 01:08:38.260664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 
'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-13 01:08:38.260668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-13 01:08:38.260671 | orchestrator | 2026-03-13 01:08:38.260674 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-03-13 01:08:38.260677 | orchestrator | Friday 13 March 2026 01:06:49 +0000 (0:00:01.201) 0:00:07.420 ********** 2026-03-13 01:08:38.260680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-13 01:08:38.260730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-13 01:08:38.260737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-13 01:08:38.260743 | orchestrator | 2026-03-13 01:08:38.260747 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-03-13 01:08:38.260750 | orchestrator | Friday 13 March 2026 01:06:50 +0000 (0:00:01.319) 0:00:08.740 ********** 2026-03-13 
01:08:38.260753 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:08:38.260756 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:08:38.260759 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:08:38.260762 | orchestrator |
2026-03-13 01:08:38.260765 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-03-13 01:08:38.260768 | orchestrator | Friday 13 March 2026 01:06:50 +0000 (0:00:00.381) 0:00:09.121 **********
2026-03-13 01:08:38.260771 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-13 01:08:38.260774 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-13 01:08:38.260777 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-13 01:08:38.260780 | orchestrator |
2026-03-13 01:08:38.260783 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-03-13 01:08:38.260787 | orchestrator | Friday 13 March 2026 01:06:52 +0000 (0:00:01.109) 0:00:10.230 **********
2026-03-13 01:08:38.260792 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-13 01:08:38.260797 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-13 01:08:38.260802 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-13 01:08:38.260810 | orchestrator |
2026-03-13 01:08:38.260815 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-03-13 01:08:38.260820 | orchestrator | Friday 13 March 2026 01:06:53 +0000 (0:00:00.770) 0:00:11.315 **********
2026-03-13 01:08:38.260825 | orchestrator | ok: [testbed-node-0 ->
localhost]
2026-03-13 01:08:38.260829 | orchestrator |
2026-03-13 01:08:38.260834 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-03-13 01:08:38.260839 | orchestrator | Friday 13 March 2026 01:06:53 +0000 (0:00:00.770) 0:00:12.086 **********
2026-03-13 01:08:38.260843 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-03-13 01:08:38.260849 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-03-13 01:08:38.260853 | orchestrator | ok: [testbed-node-0]
2026-03-13 01:08:38.260859 | orchestrator | ok: [testbed-node-1]
2026-03-13 01:08:38.260864 | orchestrator | ok: [testbed-node-2]
2026-03-13 01:08:38.260869 | orchestrator |
2026-03-13 01:08:38.260873 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-03-13 01:08:38.260878 | orchestrator | Friday 13 March 2026 01:06:54 +0000 (0:00:00.653) 0:00:12.740 **********
2026-03-13 01:08:38.260884 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:08:38.260889 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:08:38.260893 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:08:38.260896 | orchestrator |
2026-03-13 01:08:38.260899 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-03-13 01:08:38.260902 | orchestrator | Friday 13 March 2026 01:06:55 +0000 (0:00:00.562) 0:00:13.302 **********
2026-03-13 01:08:38.260906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1327603, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2750435, 'gr_name':
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1327603, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2750435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1327603, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2750435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1327659, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 
'mtime': 1773360140.0, 'ctime': 1773361112.2819433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1327659, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2819433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1327659, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2819433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1327749, 'dev': 118, 
'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2970824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1327749, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2970824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1327749, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2970824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1327649, 
'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2791655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1327649, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2791655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1327649, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2791655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 
'inode': 1327753, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2993155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1327753, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2993155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1327753, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2993155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1327626, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.27673, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1327626, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.27673, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1327626, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.27673, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1327695, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.287572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1327695, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.287572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1327695, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.287572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1327730, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2929437, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1327730, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2929437, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1327730, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2929437, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261588 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1327597, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.273341, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1327597, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.273341, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1327597, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.273341, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1327619, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2757075, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1327619, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2757075, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1327619, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2757075, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261620 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1327655, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.280377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1327655, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.280377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1327655, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.280377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261641 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1327712, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2887201, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1327712, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2887201, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1327712, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2887201, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261651 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1327744, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2953496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1327744, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2953496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1327744, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2953496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261663 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1327641, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2788665, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1327641, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2788665, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1327641, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2788665, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2026-03-13 01:08:38.261682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1327723, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2919436, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1327723, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2919436, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1327723, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2919436, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1327766, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2993155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1327766, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2993155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1327766, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2993155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1327703, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.288454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1327703, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.288454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1327703, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.288454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1327686, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2867513, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1327686, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2867513, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1327686, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2867513, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1327676, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2851555, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1327676, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2851555, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1327676, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2851555, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1327718, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2909436, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1327718, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2909436, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1327718, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 
1773361112.2909436, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1327670, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2834265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1327670, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2834265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1327670, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 
1773360140.0, 'ctime': 1773361112.2834265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1327738, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2953496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1327738, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2953496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1327632, 
'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2778218, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1327738, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2953496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1327632, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2778218, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 
0, 'size': 57270, 'inode': 1328055, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3651059, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1327632, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.2778218, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1328055, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3651059, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1327797, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3099897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1328055, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3651059, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1327797, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3099897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1327780, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3030543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1327797, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3099897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1327780, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3030543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1327832, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3357878, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1327832, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3357878, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1327780, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3030543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 
'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1327770, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3007767, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1327770, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3007767, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1327832, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3357878, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 
01:08:38.261925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1328013, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.355943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1328013, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.355943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1327770, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3007767, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1327894, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3527484, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1327894, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3527484, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1328013, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.355943, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1328020, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3559453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1328020, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3559453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 
'inode': 1327894, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3527484, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1328046, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3634782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1328046, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3634782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1328020, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3559453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1328012, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3545692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1328012, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3545692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1328046, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3634782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1327824, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3113973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1327824, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3113973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1328012, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3545692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.261998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1327791, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3063524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.262003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1327791, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3063524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.262007 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1327824, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3113973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.262037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1327822, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3106458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.262042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1327822, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3106458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.262045 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1327791, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3063524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.262049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1327784, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.305019, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.262052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1327784, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.305019, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.262058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1327822, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3106458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.262070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1327827, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3279448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.262077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1327827, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3279448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.262083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1327784, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.305019, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.262089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1328033, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3625817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.262094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1328033, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 
1773361112.3625817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.262104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1327827, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3279448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.262114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1328025, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3598993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.262120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1328025, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3598993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.262126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1328033, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3625817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.262132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1327772, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3010046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.262138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1327772, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3010046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.262147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1328025, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3598993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.262154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1327775, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3026192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.262159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1327775, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3026192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.262162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1327772, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3010046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.262166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1328006, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.35422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.262172 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1328006, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.35422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.262179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1327775, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3026192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.262188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1328023, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3569455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.262194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1328023, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3569455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.262199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1328006, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.35422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.262204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1328023, 'dev': 118, 'nlink': 1, 'atime': 1773360140.0, 'mtime': 1773360140.0, 'ctime': 1773361112.3569455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-13 01:08:38.262210 | orchestrator | 2026-03-13 01:08:38.262216 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-03-13 01:08:38.262222 | orchestrator | Friday 13 March 2026 01:07:33 +0000 (0:00:38.167) 0:00:51.470 ********** 2026-03-13 01:08:38.262228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-13 01:08:38.262240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-13 01:08:38.262246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-13 01:08:38.262251 | orchestrator | 2026-03-13 01:08:38.262257 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-03-13 01:08:38.262263 | orchestrator | Friday 13 March 2026 01:07:34 +0000 (0:00:00.931) 0:00:52.402 ********** 2026-03-13 01:08:38.262267 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:08:38.262270 | orchestrator | 2026-03-13 01:08:38.262274 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-03-13 01:08:38.262278 | orchestrator | Friday 13 March 2026 01:07:36 +0000 (0:00:02.017) 0:00:54.420 ********** 2026-03-13 01:08:38.262284 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:08:38.262289 | orchestrator | 2026-03-13 01:08:38.262295 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-13 01:08:38.262300 | orchestrator | Friday 13 March 2026 01:07:38 +0000 (0:00:02.108) 0:00:56.529 ********** 2026-03-13 01:08:38.262305 | orchestrator | 2026-03-13 01:08:38.262310 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-13 01:08:38.262316 | orchestrator | Friday 13 March 2026 01:07:38 +0000 (0:00:00.063) 0:00:56.592 ********** 2026-03-13 01:08:38.262321 | orchestrator | 2026-03-13 01:08:38.262327 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-13 
01:08:38.262332 | orchestrator | Friday 13 March 2026 01:07:38 +0000 (0:00:00.216) 0:00:56.808 ********** 2026-03-13 01:08:38.262338 | orchestrator | 2026-03-13 01:08:38.262344 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-03-13 01:08:38.262349 | orchestrator | Friday 13 March 2026 01:07:38 +0000 (0:00:00.065) 0:00:56.874 ********** 2026-03-13 01:08:38.262355 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:08:38.262360 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:08:38.262366 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:08:38.262371 | orchestrator | 2026-03-13 01:08:38.262377 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-03-13 01:08:38.262382 | orchestrator | Friday 13 March 2026 01:07:40 +0000 (0:00:01.776) 0:00:58.650 ********** 2026-03-13 01:08:38.262387 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:08:38.262392 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:08:38.262398 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-03-13 01:08:38.262404 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 
2026-03-13 01:08:38.262409 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:08:38.262415 | orchestrator | 2026-03-13 01:08:38.262420 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-03-13 01:08:38.262429 | orchestrator | Friday 13 March 2026 01:08:06 +0000 (0:00:26.236) 0:01:24.886 ********** 2026-03-13 01:08:38.262434 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:08:38.262439 | orchestrator | changed: [testbed-node-2] 2026-03-13 01:08:38.262444 | orchestrator | changed: [testbed-node-1] 2026-03-13 01:08:38.262450 | orchestrator | 2026-03-13 01:08:38.262456 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-03-13 01:08:38.262461 | orchestrator | Friday 13 March 2026 01:08:30 +0000 (0:00:23.423) 0:01:48.310 ********** 2026-03-13 01:08:38.262467 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:08:38.262472 | orchestrator | 2026-03-13 01:08:38.262477 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-03-13 01:08:38.262482 | orchestrator | Friday 13 March 2026 01:08:32 +0000 (0:00:02.052) 0:01:50.363 ********** 2026-03-13 01:08:38.262501 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:08:38.262506 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:08:38.262511 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:08:38.262517 | orchestrator | 2026-03-13 01:08:38.262522 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-03-13 01:08:38.262527 | orchestrator | Friday 13 March 2026 01:08:32 +0000 (0:00:00.486) 0:01:50.850 ********** 2026-03-13 01:08:38.262533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2026-03-13 01:08:38.262542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-03-13 01:08:38.262548 | orchestrator | 2026-03-13 01:08:38.262553 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-03-13 01:08:38.262558 | orchestrator | Friday 13 March 2026 01:08:34 +0000 (0:00:02.061) 0:01:52.912 ********** 2026-03-13 01:08:38.262563 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:08:38.262569 | orchestrator | 2026-03-13 01:08:38.262574 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 01:08:38.262580 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-13 01:08:38.262586 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-13 01:08:38.262591 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-13 01:08:38.262596 | orchestrator | 2026-03-13 01:08:38.262601 | orchestrator | 2026-03-13 01:08:38.262606 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 01:08:38.262611 | orchestrator | Friday 13 March 2026 01:08:34 +0000 (0:00:00.262) 0:01:53.175 ********** 2026-03-13 01:08:38.262622 | orchestrator | =============================================================================== 2026-03-13 01:08:38.262627 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.17s 2026-03-13 01:08:38.262637 | orchestrator | grafana : Waiting for grafana 
to start on first node ------------------- 26.24s 2026-03-13 01:08:38.262642 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 23.42s 2026-03-13 01:08:38.262647 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.11s 2026-03-13 01:08:38.262652 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.06s 2026-03-13 01:08:38.262664 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.05s 2026-03-13 01:08:38.262669 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.02s 2026-03-13 01:08:38.262674 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.78s 2026-03-13 01:08:38.262679 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.34s 2026-03-13 01:08:38.262684 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.32s 2026-03-13 01:08:38.262690 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.20s 2026-03-13 01:08:38.262695 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.11s 2026-03-13 01:08:38.262700 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.09s 2026-03-13 01:08:38.262705 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.93s 2026-03-13 01:08:38.262710 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.89s 2026-03-13 01:08:38.262715 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.77s 2026-03-13 01:08:38.262720 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.73s 2026-03-13 01:08:38.262725 | orchestrator | grafana : Ensuring config directories exist 
----------------------------- 0.71s 2026-03-13 01:08:38.262730 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.68s 2026-03-13 01:08:38.262735 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.65s 2026-03-13 01:08:38.262739 | orchestrator | 2026-03-13 01:08:38 | INFO  | Task 6685e78e-c366-430c-99eb-c698f723c8a2 is in state STARTED 2026-03-13 01:08:38.262744 | orchestrator | 2026-03-13 01:08:38 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:08:41.304088 | orchestrator | 2026-03-13 01:08:41 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED 2026-03-13 01:08:41.306007 | orchestrator | 2026-03-13 01:08:41 | INFO  | Task 6685e78e-c366-430c-99eb-c698f723c8a2 is in state STARTED 2026-03-13 01:08:41.306292 | orchestrator | 2026-03-13 01:08:41 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:08:44.342936 | orchestrator | 2026-03-13 01:08:44 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED 2026-03-13 01:08:44.343723 | orchestrator | 2026-03-13 01:08:44 | INFO  | Task 6685e78e-c366-430c-99eb-c698f723c8a2 is in state STARTED 2026-03-13 01:08:44.343758 | orchestrator | 2026-03-13 01:08:44 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:08:47.382653 | orchestrator | 2026-03-13 01:08:47 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED 2026-03-13 01:08:47.385163 | orchestrator | 2026-03-13 01:08:47 | INFO  | Task 6685e78e-c366-430c-99eb-c698f723c8a2 is in state STARTED 2026-03-13 01:08:47.385215 | orchestrator | 2026-03-13 01:08:47 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:08:50.433489 | orchestrator | 2026-03-13 01:08:50 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED 2026-03-13 01:08:50.434485 | orchestrator | 2026-03-13 01:08:50 | INFO  | Task 6685e78e-c366-430c-99eb-c698f723c8a2 is in state STARTED 2026-03-13 01:08:50.434575 | orchestrator | 
2026-03-13 01:08:50 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:08:53.471748 | orchestrator | 2026-03-13 01:08:53 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED 2026-03-13 01:08:53.473386 | orchestrator | 2026-03-13 01:08:53 | INFO  | Task 6685e78e-c366-430c-99eb-c698f723c8a2 is in state STARTED 2026-03-13 01:08:53.473427 | orchestrator | 2026-03-13 01:08:53 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:08:56.516407 | orchestrator | 2026-03-13 01:08:56 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED 2026-03-13 01:08:56.520202 | orchestrator | 2026-03-13 01:08:56 | INFO  | Task 6685e78e-c366-430c-99eb-c698f723c8a2 is in state STARTED 2026-03-13 01:08:56.520525 | orchestrator | 2026-03-13 01:08:56 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:08:59.569122 | orchestrator | 2026-03-13 01:08:59 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED 2026-03-13 01:08:59.569184 | orchestrator | 2026-03-13 01:08:59 | INFO  | Task 6685e78e-c366-430c-99eb-c698f723c8a2 is in state STARTED 2026-03-13 01:08:59.569193 | orchestrator | 2026-03-13 01:08:59 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:09:02.627295 | orchestrator | 2026-03-13 01:09:02 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED 2026-03-13 01:09:02.629106 | orchestrator | 2026-03-13 01:09:02 | INFO  | Task 6685e78e-c366-430c-99eb-c698f723c8a2 is in state STARTED 2026-03-13 01:09:02.629214 | orchestrator | 2026-03-13 01:09:02 | INFO  | Wait 1 second(s) until the next check 2026-03-13 01:09:05.671975 | orchestrator | 2026-03-13 01:09:05 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED 2026-03-13 01:09:05.676275 | orchestrator | 2026-03-13 01:09:05 | INFO  | Task 6685e78e-c366-430c-99eb-c698f723c8a2 is in state STARTED 2026-03-13 01:09:05.676366 | orchestrator | 2026-03-13 01:09:05 | INFO  | Wait 1 second(s) until the next check 2026-03-13 
01:09:08.713344 | orchestrator | 2026-03-13 01:09:08 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state STARTED
2026-03-13 01:09:08.714788 | orchestrator | 2026-03-13 01:09:08 | INFO  | Task 6685e78e-c366-430c-99eb-c698f723c8a2 is in state STARTED
2026-03-13 01:09:08.714935 | orchestrator | 2026-03-13 01:09:08 | INFO  | Wait 1 second(s) until the next check
[... identical STARTED/wait polling records for both tasks repeated every ~3 s from 01:09:11 through 01:12:29 ...]
2026-03-13 01:12:32.635651 | orchestrator | 2026-03-13 01:12:32 | INFO  | Task bcf7f5e7-22fc-4f0a-a4bd-51f2ebf357ea is in state SUCCESS
| orchestrator | 2026-03-13 01:12:32.638580 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-13 01:12:32.638595 | orchestrator | 2026-03-13 01:12:32.638602 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-03-13 01:12:32.638609 | orchestrator | Friday 13 March 2026 01:04:37 +0000 (0:00:00.575) 0:00:00.575 ********** 2026-03-13 01:12:32.638616 | orchestrator | changed: [testbed-manager] 2026-03-13 01:12:32.638623 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:12:32.638641 | orchestrator | changed: [testbed-node-1] 2026-03-13 01:12:32.638666 | orchestrator | changed: [testbed-node-2] 2026-03-13 01:12:32.638676 | orchestrator | changed: [testbed-node-3] 2026-03-13 01:12:32.638682 | orchestrator | changed: [testbed-node-4] 2026-03-13 01:12:32.638689 | orchestrator | changed: [testbed-node-5] 2026-03-13 01:12:32.638695 | orchestrator | 2026-03-13 01:12:32.638701 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-13 01:12:32.638707 | orchestrator | Friday 13 March 2026 01:04:38 +0000 (0:00:00.996) 0:00:01.572 ********** 2026-03-13 01:12:32.638713 | orchestrator | changed: [testbed-manager] 2026-03-13 01:12:32.638720 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:12:32.638726 | orchestrator | changed: [testbed-node-1] 2026-03-13 01:12:32.638733 | orchestrator | changed: [testbed-node-2] 2026-03-13 01:12:32.638740 | orchestrator | changed: [testbed-node-3] 2026-03-13 01:12:32.638746 | orchestrator | changed: [testbed-node-4] 2026-03-13 01:12:32.638780 | orchestrator | changed: [testbed-node-5] 2026-03-13 01:12:32.638786 | orchestrator | 2026-03-13 01:12:32.638792 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-13 01:12:32.638796 | orchestrator | Friday 13 March 2026 01:04:39 +0000 (0:00:00.661) 0:00:02.233 ********** 2026-03-13 
01:12:32.638801 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-03-13 01:12:32.638806 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-03-13 01:12:32.638814 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-03-13 01:12:32.638824 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-03-13 01:12:32.638830 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-03-13 01:12:32.638835 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-03-13 01:12:32.638844 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-03-13 01:12:32.638851 | orchestrator | 2026-03-13 01:12:32.638860 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-03-13 01:12:32.638865 | orchestrator | 2026-03-13 01:12:32.638885 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-03-13 01:12:32.638892 | orchestrator | Friday 13 March 2026 01:04:40 +0000 (0:00:00.968) 0:00:03.202 ********** 2026-03-13 01:12:32.638899 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 01:12:32.638946 | orchestrator | 2026-03-13 01:12:32.638954 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-03-13 01:12:32.638960 | orchestrator | Friday 13 March 2026 01:04:41 +0000 (0:00:01.051) 0:00:04.254 ********** 2026-03-13 01:12:32.638967 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-03-13 01:12:32.638972 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-03-13 01:12:32.638976 | orchestrator | 2026-03-13 01:12:32.638981 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-03-13 01:12:32.638987 | orchestrator | Friday 13 March 2026 01:04:45 +0000 (0:00:03.916) 0:00:08.170 ********** 2026-03-13 
01:12:32.638996 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-13 01:12:32.639004 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-13 01:12:32.639010 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:12:32.639016 | orchestrator | 2026-03-13 01:12:32.639022 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-03-13 01:12:32.639028 | orchestrator | Friday 13 March 2026 01:04:50 +0000 (0:00:05.239) 0:00:13.410 ********** 2026-03-13 01:12:32.639033 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:12:32.639040 | orchestrator | 2026-03-13 01:12:32.639046 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-03-13 01:12:32.639051 | orchestrator | Friday 13 March 2026 01:04:51 +0000 (0:00:00.658) 0:00:14.068 ********** 2026-03-13 01:12:32.639057 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:12:32.639062 | orchestrator | 2026-03-13 01:12:32.639067 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-03-13 01:12:32.639073 | orchestrator | Friday 13 March 2026 01:04:53 +0000 (0:00:01.917) 0:00:15.986 ********** 2026-03-13 01:12:32.639079 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:12:32.639084 | orchestrator | 2026-03-13 01:12:32.639089 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-13 01:12:32.639095 | orchestrator | Friday 13 March 2026 01:04:57 +0000 (0:00:04.499) 0:00:20.485 ********** 2026-03-13 01:12:32.639101 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:12:32.639106 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:12:32.639112 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:12:32.639117 | orchestrator | 2026-03-13 01:12:32.639123 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-03-13 01:12:32.639129 | 
orchestrator | Friday 13 March 2026 01:04:58 +0000 (0:00:00.653) 0:00:21.139 ********** 2026-03-13 01:12:32.639134 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:12:32.639140 | orchestrator | 2026-03-13 01:12:32.639146 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-03-13 01:12:32.639151 | orchestrator | Friday 13 March 2026 01:05:31 +0000 (0:00:32.760) 0:00:53.900 ********** 2026-03-13 01:12:32.639157 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:12:32.639162 | orchestrator | 2026-03-13 01:12:32.639168 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-13 01:12:32.639174 | orchestrator | Friday 13 March 2026 01:05:46 +0000 (0:00:14.949) 0:01:08.850 ********** 2026-03-13 01:12:32.639179 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:12:32.639185 | orchestrator | 2026-03-13 01:12:32.639191 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-13 01:12:32.639197 | orchestrator | Friday 13 March 2026 01:05:58 +0000 (0:00:12.194) 0:01:21.044 ********** 2026-03-13 01:12:32.639214 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:12:32.639221 | orchestrator | 2026-03-13 01:12:32.639227 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-03-13 01:12:32.639234 | orchestrator | Friday 13 March 2026 01:05:59 +0000 (0:00:00.860) 0:01:21.905 ********** 2026-03-13 01:12:32.639255 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:12:32.639260 | orchestrator | 2026-03-13 01:12:32.639263 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-13 01:12:32.639293 | orchestrator | Friday 13 March 2026 01:05:59 +0000 (0:00:00.402) 0:01:22.308 ********** 2026-03-13 01:12:32.639298 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-03-13 01:12:32.639302 | orchestrator | 2026-03-13 01:12:32.639306 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-03-13 01:12:32.639310 | orchestrator | Friday 13 March 2026 01:06:00 +0000 (0:00:00.442) 0:01:22.751 ********** 2026-03-13 01:12:32.639314 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:12:32.639317 | orchestrator | 2026-03-13 01:12:32.639321 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-03-13 01:12:32.639325 | orchestrator | Friday 13 March 2026 01:06:16 +0000 (0:00:16.777) 0:01:39.528 ********** 2026-03-13 01:12:32.639329 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:12:32.639334 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:12:32.639340 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:12:32.639346 | orchestrator | 2026-03-13 01:12:32.639352 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-03-13 01:12:32.639358 | orchestrator | 2026-03-13 01:12:32.639363 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-03-13 01:12:32.639369 | orchestrator | Friday 13 March 2026 01:06:17 +0000 (0:00:00.270) 0:01:39.798 ********** 2026-03-13 01:12:32.639375 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 01:12:32.639382 | orchestrator | 2026-03-13 01:12:32.639388 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-03-13 01:12:32.639394 | orchestrator | Friday 13 March 2026 01:06:17 +0000 (0:00:00.509) 0:01:40.308 ********** 2026-03-13 01:12:32.639401 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:12:32.639506 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:12:32.639514 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:12:32.639520 | orchestrator | 2026-03-13 
TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
Friday 13 March 2026 01:06:19 +0000 (0:00:02.063) 0:01:42.371 **********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
Friday 13 March 2026 01:06:21 +0000 (0:00:02.188) 0:01:44.560 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
Friday 13 March 2026 01:06:22 +0000 (0:00:00.363) 0:01:44.923 **********
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=None)
skipping: [testbed-node-2]
ok: [testbed-node-0] => (item=None)
ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]

TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
Friday 13 March 2026 01:06:30 +0000 (0:00:07.901) 0:01:52.824 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
Friday 13 March 2026 01:06:30 +0000 (0:00:00.315) 0:01:53.140 **********
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=None)
skipping: [testbed-node-2]

TASK [nova-cell : Ensuring config directories exist] ***************************
Friday 13 March 2026 01:06:31 +0000 (0:00:00.585) 0:01:53.725 **********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
Friday 13 March 2026 01:06:31 +0000 (0:00:00.525) 0:01:54.250 **********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]
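The cell bootstrap that follows (running the bootstrap container, listing cells, creating the cell) drives `nova-manage cell_v2` inside the nova-cell-bootstrap container. A hedged sketch of the kind of invocations involved; the subcommands and flags are standard `nova-manage` options, but the connection URLs shown are placeholders, not values from this deployment:

```shell
# Illustrative nova-manage cell_v2 usage corresponding to the
# "Get a list of existing cells" and "Create cell" tasks below.
nova-manage cell_v2 list_cells --verbose
nova-manage cell_v2 create_cell --name cell1 \
  --database_connection 'mysql+pymysql://nova:PLACEHOLDER@dbhost/nova' \
  --transport-url 'rabbit://openstack:PLACEHOLDER@rabbithost:5672/'
```

The ~20s spent in "Running Nova cell bootstrap container" is typical: it includes pulling/starting the container and running the database migrations before the cell commands.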
TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
Friday 13 March 2026 01:06:32 +0000 (0:00:00.862) 0:01:55.112 **********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [nova-cell : Running Nova cell bootstrap container] ***********************
Friday 13 March 2026 01:06:34 +0000 (0:00:01.782) 0:01:56.895 **********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-0]

TASK [nova-cell : Get a list of existing cells] ********************************
Friday 13 March 2026 01:06:54 +0000 (0:00:20.334) 0:02:17.229 **********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-0]

TASK [nova-cell : Extract current cell settings from list] *********************
Friday 13 March 2026 01:07:07 +0000 (0:00:12.665) 0:02:29.895 **********
ok: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [nova-cell : Create cell] *************************************************
Friday 13 March 2026 01:07:08 +0000 (0:00:00.922) 0:02:30.818 **********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [nova-cell : Update cell] *************************************************
Friday 13 March 2026 01:07:22 +0000 (0:00:13.886) 0:02:44.704 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [Bootstrap upgrade] *******************************************************
Friday 13 March 2026 01:07:23 +0000 (0:00:00.994) 0:02:45.699 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

PLAY [Apply role nova] *********************************************************

TASK [nova : include_tasks] ****************************************************
Friday 13 March 2026 01:07:23 +0000 (0:00:00.432) 0:02:46.132 **********
included:
/ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [service-ks-register : nova | Creating services] **************************
Friday 13 March 2026 01:07:24 +0000 (0:00:00.566) 0:02:46.698 **********
skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
changed: [testbed-node-0] => (item=nova (compute))

TASK [service-ks-register : nova | Creating endpoints] *************************
Friday 13 March 2026 01:07:26 +0000 (0:00:02.865) 0:02:49.564 **********
skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)

TASK [service-ks-register : nova | Creating projects] **************************
Friday 13 March 2026 01:07:33 +0000 (0:00:06.252) 0:02:55.816 **********
ok: [testbed-node-0] => (item=service)

TASK [service-ks-register : nova | Creating users] *****************************
Friday 13 March 2026 01:07:35 +0000 (0:00:02.782) 0:02:58.599 **********
changed: [testbed-node-0] => (item=nova -> service)
[WARNING]: Module did not set no_log for update_password

TASK [service-ks-register : nova | Creating roles] *****************************
Friday 13 March 2026 01:07:39 +0000 (0:00:03.428) 0:03:02.027 **********
ok: [testbed-node-0] => (item=admin)

TASK [service-ks-register : nova | Granting user roles] ************************
Friday 13 March 2026 01:07:42 +0000 (0:00:03.207) 0:03:05.235 **********
changed: [testbed-node-0] => (item=nova -> service -> admin)
changed: [testbed-node-0] => (item=nova -> service -> service)

TASK [nova : Ensuring config directories exist] ********************************
Friday 13 March 2026 01:07:49 +0000 (0:00:06.737) 0:03:11.972 **********
changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})

TASK [nova : Check if policies shall be overwritten] ***************************
Friday 13 March 2026 01:07:50 +0000 (0:00:01.267) 0:03:13.240 **********
skipping: [testbed-node-0]

TASK [nova : Set nova policy file] *********************************************
Friday 13 March 2026 01:07:50 +0000 (0:00:00.131) 0:03:13.371 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping:
[testbed-node-2]

TASK [nova : Check for vendordata file] ****************************************
Friday 13 March 2026 01:07:51 +0000 (0:00:00.477) 0:03:13.849 **********
ok: [testbed-node-0 -> localhost]

TASK [nova : Set vendordata file path] *****************************************
Friday 13 March 2026 01:07:51 +0000 (0:00:00.770) 0:03:14.619 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [nova : include_tasks] ****************************************************
Friday 13 March 2026 01:07:52 +0000 (0:00:00.315) 0:03:14.934 **********
included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
Friday 13 March 2026 01:07:52 +0000 (0:00:00.527) 0:03:15.462 **********
changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})

TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
Friday 13 March 2026 01:07:55 +0000 (0:00:02.730) 0:03:18.192 **********
skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
skipping: [testbed-node-2]

TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
Friday 13 March 2026 01:07:56 +0000 (0:00:00.592) 0:03:18.785 **********
skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
skipping: [testbed-node-2]

TASK [nova : Copying over config.json files for services] **********************
Friday 13 March 2026 01:07:56 +0000 (0:00:00.817) 0:03:19.602 **********
changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port':
'8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-13 01:12:32.641362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-13 01:12:32.641373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.641397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.641404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.641411 | orchestrator | 2026-03-13 01:12:32.641417 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-03-13 01:12:32.641424 | orchestrator | Friday 13 March 2026 01:07:59 +0000 (0:00:02.457) 0:03:22.060 ********** 2026-03-13 01:12:32.641431 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-13 01:12:32.641442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-13 01:12:32.641466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-13 01:12:32.641474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.641481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.641490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.641497 | orchestrator | 2026-03-13 01:12:32.641503 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-03-13 01:12:32.641509 | orchestrator | Friday 13 March 2026 01:08:04 +0000 (0:00:05.302) 0:03:27.362 ********** 2026-03-13 01:12:32.641516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-13 01:12:32.641549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-13 01:12:32.641556 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:12:32.641563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-13 01:12:32.641578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-13 01:12:32.641585 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:12:32.641592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-13 01:12:32.641599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-13 01:12:32.641606 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:12:32.641612 | orchestrator | 2026-03-13 01:12:32.641618 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-03-13 01:12:32.641625 | orchestrator | Friday 13 March 2026 01:08:05 +0000 (0:00:00.639) 
0:03:28.002 ********** 2026-03-13 01:12:32.641632 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:12:32.641637 | orchestrator | changed: [testbed-node-1] 2026-03-13 01:12:32.641641 | orchestrator | changed: [testbed-node-2] 2026-03-13 01:12:32.641645 | orchestrator | 2026-03-13 01:12:32.641660 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-03-13 01:12:32.641666 | orchestrator | Friday 13 March 2026 01:08:06 +0000 (0:00:01.484) 0:03:29.487 ********** 2026-03-13 01:12:32.641672 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:12:32.641678 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:12:32.641684 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:12:32.641690 | orchestrator | 2026-03-13 01:12:32.641697 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-03-13 01:12:32.641703 | orchestrator | Friday 13 March 2026 01:08:07 +0000 (0:00:00.315) 0:03:29.802 ********** 2026-03-13 01:12:32.641710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-13 01:12:32.641722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-13 01:12:32.641749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-13 01:12:32.641758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.641769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.641775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.641779 | orchestrator | 2026-03-13 01:12:32.641783 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-13 01:12:32.641787 | orchestrator | Friday 13 March 2026 01:08:09 +0000 (0:00:02.135) 0:03:31.938 ********** 2026-03-13 01:12:32.641790 | orchestrator | 2026-03-13 01:12:32.641794 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-13 01:12:32.641798 | orchestrator | Friday 13 March 2026 01:08:09 +0000 (0:00:00.189) 0:03:32.128 ********** 2026-03-13 01:12:32.641802 | orchestrator | 2026-03-13 01:12:32.641805 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-13 01:12:32.641809 | orchestrator | Friday 13 March 2026 01:08:09 +0000 (0:00:00.217) 0:03:32.346 ********** 2026-03-13 01:12:32.641813 | orchestrator | 2026-03-13 01:12:32.641816 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-03-13 01:12:32.641820 | orchestrator | Friday 13 March 2026 01:08:09 +0000 (0:00:00.222) 0:03:32.568 ********** 2026-03-13 01:12:32.641824 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:12:32.641827 | orchestrator | changed: [testbed-node-1] 2026-03-13 01:12:32.641831 | 
orchestrator | changed: [testbed-node-2] 2026-03-13 01:12:32.641835 | orchestrator | 2026-03-13 01:12:32.641838 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-03-13 01:12:32.641842 | orchestrator | Friday 13 March 2026 01:08:23 +0000 (0:00:13.791) 0:03:46.360 ********** 2026-03-13 01:12:32.641846 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:12:32.641850 | orchestrator | changed: [testbed-node-1] 2026-03-13 01:12:32.641854 | orchestrator | changed: [testbed-node-2] 2026-03-13 01:12:32.641859 | orchestrator | 2026-03-13 01:12:32.641863 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-03-13 01:12:32.641867 | orchestrator | 2026-03-13 01:12:32.641872 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-13 01:12:32.641876 | orchestrator | Friday 13 March 2026 01:08:33 +0000 (0:00:09.989) 0:03:56.350 ********** 2026-03-13 01:12:32.641881 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 01:12:32.641885 | orchestrator | 2026-03-13 01:12:32.641890 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-13 01:12:32.641894 | orchestrator | Friday 13 March 2026 01:08:34 +0000 (0:00:01.187) 0:03:57.537 ********** 2026-03-13 01:12:32.641898 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:12:32.641998 | orchestrator | skipping: [testbed-node-4] 2026-03-13 01:12:32.642010 | orchestrator | skipping: [testbed-node-5] 2026-03-13 01:12:32.642038 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:12:32.642044 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:12:32.642048 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:12:32.642057 | orchestrator | 2026-03-13 01:12:32.642062 | orchestrator | TASK [Load 
and persist br_netfilter module] ************************************ 2026-03-13 01:12:32.642066 | orchestrator | Friday 13 March 2026 01:08:35 +0000 (0:00:00.589) 0:03:58.127 ********** 2026-03-13 01:12:32.642070 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:12:32.642075 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:12:32.642079 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:12:32.642083 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-13 01:12:32.642088 | orchestrator | 2026-03-13 01:12:32.642096 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-13 01:12:32.642127 | orchestrator | Friday 13 March 2026 01:08:36 +0000 (0:00:01.022) 0:03:59.149 ********** 2026-03-13 01:12:32.642132 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-03-13 01:12:32.642136 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-03-13 01:12:32.642140 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-03-13 01:12:32.642144 | orchestrator | 2026-03-13 01:12:32.642149 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-13 01:12:32.642153 | orchestrator | Friday 13 March 2026 01:08:37 +0000 (0:00:00.668) 0:03:59.818 ********** 2026-03-13 01:12:32.642157 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-03-13 01:12:32.642162 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-03-13 01:12:32.642166 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-03-13 01:12:32.642170 | orchestrator | 2026-03-13 01:12:32.642175 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-13 01:12:32.642179 | orchestrator | Friday 13 March 2026 01:08:38 +0000 (0:00:01.239) 0:04:01.058 ********** 2026-03-13 01:12:32.642183 | orchestrator | skipping: [testbed-node-3] => 
(item=br_netfilter) 
2026-03-13 01:12:32.642188 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:12:32.642192 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter) 
2026-03-13 01:12:32.642196 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:12:32.642200 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter) 
2026-03-13 01:12:32.642204 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:12:32.642209 | orchestrator |
2026-03-13 01:12:32.642213 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2026-03-13 01:12:32.642217 | orchestrator | Friday 13 March 2026 01:08:38 +0000 (0:00:00.508) 0:04:01.566 **********
2026-03-13 01:12:32.642222 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables) 
2026-03-13 01:12:32.642226 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables) 
2026-03-13 01:12:32.642230 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:12:32.642235 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables) 
2026-03-13 01:12:32.642239 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables) 
2026-03-13 01:12:32.642243 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:12:32.642247 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-13 01:12:32.642251 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables) 
2026-03-13 01:12:32.642256 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables) 
2026-03-13 01:12:32.642260 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:12:32.642264 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-13 01:12:32.642268 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-13 01:12:32.642273 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-13 01:12:32.642277 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-13 01:12:32.642281 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-13 01:12:32.642288 | orchestrator |
2026-03-13 01:12:32.642292 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2026-03-13 01:12:32.642297 | orchestrator | Friday 13 March 2026 01:08:40 +0000 (0:00:01.198) 0:04:02.764 **********
2026-03-13 01:12:32.642301 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:12:32.642305 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:12:32.642310 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:12:32.642314 | orchestrator | changed: [testbed-node-3]
2026-03-13 01:12:32.642318 | orchestrator | changed: [testbed-node-4]
2026-03-13 01:12:32.642322 | orchestrator | changed: [testbed-node-5]
2026-03-13 01:12:32.642326 | orchestrator |
2026-03-13 01:12:32.642330 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2026-03-13 01:12:32.642335 | orchestrator | Friday 13 March 2026 01:08:41 +0000 (0:00:01.123) 0:04:03.888 **********
2026-03-13 01:12:32.642339 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:12:32.642343 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:12:32.642348 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:12:32.642352 | orchestrator | changed: [testbed-node-4]
2026-03-13 01:12:32.642356 | orchestrator | changed: [testbed-node-3]
2026-03-13 01:12:32.642360 | orchestrator | changed: [testbed-node-5]
2026-03-13 01:12:32.642364 | orchestrator |
2026-03-13 01:12:32.642369 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-03-13 01:12:32.642373 | orchestrator | Friday 13 March 2026 01:08:42 +0000
(0:00:01.660) 0:04:05.549 ********** 2026-03-13 01:12:32.642378 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-13 01:12:32.642396 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-13 01:12:32.642401 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 
'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-13 01:12:32.642409 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-13 01:12:32.642414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 
2026-03-13 01:12:32.642419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-13 01:12:32.642423 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-13 01:12:32.642439 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-13 01:12:32.642444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-13 01:12:32.642449 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.642455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.642460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 
'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.642464 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.642480 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-13 01:12:32.642485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-13 01:12:32.642489 | orchestrator |
2026-03-13 01:12:32.642494 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-13 01:12:32.642501 | orchestrator | Friday 13 March 2026 01:08:44 +0000 (0:00:01.976) 0:04:07.525 **********
2026-03-13 01:12:32.642506 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 01:12:32.642511 | orchestrator |
2026-03-13 01:12:32.642515 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-03-13 01:12:32.642519 | orchestrator | Friday 13 March 2026 01:08:46 +0000 (0:00:01.220) 0:04:08.745 **********
2026-03-13 01:12:32.642524 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro',
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-13 01:12:32.642529 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-13 01:12:32.642544 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-13 01:12:32.642549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-13 01:12:32.642553 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-13 01:12:32.642560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-13 01:12:32.642565 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-13 01:12:32.642569 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-13 01:12:32.642574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-13 
01:12:32.642590 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.642596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.642607 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.642615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.642625 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.642631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-13 01:12:32.642637 | orchestrator |
2026-03-13 01:12:32.642643 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2026-03-13 01:12:32.642650 | orchestrator | Friday 13 March 2026 01:08:48 +0000 (0:00:02.912) 0:04:11.658 **********
2026-03-13 01:12:32.642683 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 
2026-03-13 01:12:32.642702 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period':
'5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-13 01:12:32.642709 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-13 01:12:32.642715 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:12:32.642722 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-13 01:12:32.642728 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': 
{'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-13 01:12:32.642752 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-13 01:12:32.642759 | orchestrator | skipping: [testbed-node-4] 2026-03-13 01:12:32.642765 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-13 01:12:32.642775 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-13 01:12:32.642782 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-13 01:12:32.642788 | orchestrator | skipping: [testbed-node-5] 2026-03-13 01:12:32.642795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-13 01:12:32.642801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-13 01:12:32.642808 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:12:32.642832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-13 01:12:32.642843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-13 01:12:32.642849 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:12:32.642856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-13 01:12:32.642863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-13 01:12:32.642870 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:12:32.642876 | orchestrator | 2026-03-13 01:12:32.642883 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-13 01:12:32.642889 | 
orchestrator | Friday 13 March 2026 01:08:50 +0000 (0:00:01.609) 0:04:13.268 ********** 2026-03-13 01:12:32.642895 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-13 01:12:32.642936 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-13 01:12:32.642969 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-13 01:12:32.642977 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-13 01:12:32.642984 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-13 01:12:32.642991 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-13 01:12:32.642997 | orchestrator | skipping: [testbed-node-4] 2026-03-13 01:12:32.643002 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:12:32.643009 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-13 01:12:32.643014 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-13 01:12:32.643039 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-13 01:12:32.643045 | orchestrator | skipping: [testbed-node-5] 2026-03-13 01:12:32.643051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-13 01:12:32.643057 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-13 01:12:32.643062 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:12:32.643068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-13 01:12:32.643074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-13 01:12:32.643080 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:12:32.643085 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-13 01:12:32.643113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-13 01:12:32.643119 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:12:32.643125 | orchestrator |
2026-03-13 01:12:32.643131 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-13 01:12:32.643137 | orchestrator | Friday 13 March 2026 01:08:52 +0000 (0:00:02.109) 0:04:15.377 **********
2026-03-13 01:12:32.643151 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:12:32.643158 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:12:32.643164 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:12:32.643170 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-13 01:12:32.643176 | orchestrator |
2026-03-13 01:12:32.643182 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2026-03-13 01:12:32.643187 | orchestrator | Friday 13 March 2026 01:08:53 +0000 (0:00:01.018) 0:04:16.396 **********
2026-03-13 01:12:32.643193 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-13 01:12:32.643199 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-13 01:12:32.643205 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-13 01:12:32.643211 | orchestrator |
2026-03-13 01:12:32.643218 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-03-13 01:12:32.643224 | orchestrator | Friday 13 March 2026 01:08:54 +0000 (0:00:00.961) 0:04:17.358 **********
2026-03-13 01:12:32.643231 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-13 01:12:32.643237 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-13 01:12:32.643243 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-13 01:12:32.643250 | orchestrator |
2026-03-13 01:12:32.643256 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-03-13 01:12:32.643262 | orchestrator | Friday 13 March 2026 01:08:55 +0000 (0:00:01.025) 0:04:18.383 **********
2026-03-13 01:12:32.643268 | orchestrator | ok: [testbed-node-3]
2026-03-13 01:12:32.643275 | orchestrator | ok: [testbed-node-4]
2026-03-13 01:12:32.643281 | orchestrator | ok: [testbed-node-5]
2026-03-13 01:12:32.643287 | orchestrator |
2026-03-13 01:12:32.643293 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-03-13 01:12:32.643299 | orchestrator | Friday 13 March 2026 01:08:56 +0000 (0:00:00.526) 0:04:18.909 **********
2026-03-13 01:12:32.643305 | orchestrator | ok: [testbed-node-3]
2026-03-13 01:12:32.643311 | orchestrator | ok: [testbed-node-4]
2026-03-13 01:12:32.643317 | orchestrator | ok: [testbed-node-5]
2026-03-13 01:12:32.643323 | orchestrator |
2026-03-13 01:12:32.643330 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-03-13 01:12:32.643336 | orchestrator | Friday 13 March 2026 01:08:56 +0000 (0:00:00.714) 0:04:19.624 **********
2026-03-13 01:12:32.643343 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-13 01:12:32.643348 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-13 01:12:32.643351 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-13 01:12:32.643355 | orchestrator |
2026-03-13 01:12:32.643359 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-03-13 01:12:32.643368 | orchestrator | Friday 13 March 2026 01:08:57 +0000 (0:00:01.016) 0:04:20.640 **********
2026-03-13 01:12:32.643371 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-13 01:12:32.643375 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-13 01:12:32.643379 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-13 01:12:32.643385 | orchestrator |
2026-03-13 01:12:32.643391 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-03-13 01:12:32.643397 | orchestrator | Friday 13 March 2026 01:08:59 +0000 (0:00:01.114) 0:04:21.754 **********
2026-03-13 01:12:32.643403 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-13 01:12:32.643409 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-13 01:12:32.643415 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-13 01:12:32.643422 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-03-13 01:12:32.643428 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-03-13 01:12:32.643434 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-03-13 01:12:32.643439 | orchestrator |
2026-03-13 01:12:32.643445 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-03-13 01:12:32.643451 | orchestrator | Friday 13 March 2026 01:09:03 +0000 (0:00:03.923) 0:04:25.678 **********
2026-03-13 01:12:32.643457 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:12:32.643463 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:12:32.643469 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:12:32.643475 | orchestrator |
2026-03-13 01:12:32.643481 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-03-13 01:12:32.643488 | orchestrator | Friday 13 March 2026 01:09:03 +0000 (0:00:00.498) 0:04:26.176 **********
2026-03-13 01:12:32.643494 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:12:32.643500 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:12:32.643506 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:12:32.643512 | orchestrator |
2026-03-13 01:12:32.643519 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-03-13 01:12:32.643525 | orchestrator | Friday 13 March 2026 01:09:03 +0000 (0:00:00.343) 0:04:26.520 **********
2026-03-13 01:12:32.643531 | orchestrator | changed: [testbed-node-3]
2026-03-13 01:12:32.643537 | orchestrator | changed: [testbed-node-4]
2026-03-13 01:12:32.643543 | orchestrator | changed: [testbed-node-5]
2026-03-13 01:12:32.643549 | orchestrator |
2026-03-13 01:12:32.643556 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-03-13 01:12:32.643562 | orchestrator | Friday 13 March 2026 01:09:05 +0000 (0:00:01.350) 0:04:27.870 **********
2026-03-13 01:12:32.643601 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-13 01:12:32.643608 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-13 01:12:32.643615 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-13 01:12:32.643621 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-13 01:12:32.643628 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-13 01:12:32.643634 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-13 01:12:32.643640 | orchestrator |
2026-03-13 01:12:32.643647 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-03-13 01:12:32.643654 | orchestrator | Friday 13 March 2026 01:09:08 +0000 (0:00:03.232) 0:04:31.103 **********
2026-03-13 01:12:32.643665 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-13 01:12:32.643672 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-13 01:12:32.643678 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-13 01:12:32.643684 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-13 01:12:32.643690 | orchestrator | changed: [testbed-node-3]
2026-03-13 01:12:32.643696 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-13 01:12:32.643702 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-13 01:12:32.643709 | orchestrator | changed: [testbed-node-4]
2026-03-13 01:12:32.643715 | orchestrator | changed: [testbed-node-5]
2026-03-13 01:12:32.643721 | orchestrator |
2026-03-13 01:12:32.643728 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] *************************
2026-03-13 01:12:32.643734 | orchestrator | Friday 13 March 2026 01:09:11 +0000 (0:00:02.926) 0:04:34.030 **********
2026-03-13 01:12:32.643741 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:12:32.643747 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:12:32.643753 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:12:32.643759 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-13 01:12:32.643765 | orchestrator |
2026-03-13 01:12:32.643771 | orchestrator | TASK [nova-cell : Check qemu wrapper file] *************************************
2026-03-13 01:12:32.643777 | orchestrator | Friday 13 March 2026 01:09:12 +0000 (0:00:01.552) 0:04:35.582 **********
2026-03-13 01:12:32.643784 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-13 01:12:32.643790 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-13 01:12:32.643796 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-13 01:12:32.643802 | orchestrator |
2026-03-13 01:12:32.643809 | orchestrator | TASK [nova-cell : Copy qemu wrapper] *******************************************
2026-03-13 01:12:32.643815 | orchestrator | Friday 13 March 2026 01:09:14 +0000 (0:00:01.128) 0:04:36.711 **********
2026-03-13 01:12:32.643821 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:12:32.643827 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:12:32.643834 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:12:32.643840 | orchestrator |
2026-03-13 01:12:32.643871 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-03-13 01:12:32.643878 | orchestrator | Friday 13 March 2026 01:09:14 +0000 (0:00:00.300) 0:04:37.011 **********
2026-03-13 01:12:32.643884 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:12:32.643891 | orchestrator |
2026-03-13 01:12:32.643898 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-03-13 01:12:32.643916 | orchestrator | Friday 13 March 2026 01:09:14 +0000 (0:00:00.116) 0:04:37.127 **********
2026-03-13 01:12:32.643922 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:12:32.643929 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:12:32.643935 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:12:32.643941 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:12:32.643947 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:12:32.643953 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:12:32.643958 | orchestrator |
2026-03-13 01:12:32.643963 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-03-13 01:12:32.643969 | orchestrator | Friday 13 March 2026 01:09:14 +0000 (0:00:00.546) 0:04:37.674 **********
2026-03-13 01:12:32.643975 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-13 01:12:32.643981 | orchestrator |
2026-03-13 01:12:32.643987 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-03-13 01:12:32.643993 | orchestrator | Friday 13 March 2026 01:09:15 +0000 (0:00:00.936) 0:04:38.611 **********
2026-03-13 01:12:32.643999 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:12:32.644005 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:12:32.644011 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:12:32.644017 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:12:32.644023 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:12:32.644033 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:12:32.644038 | orchestrator |
2026-03-13 01:12:32.644044 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-03-13 01:12:32.644049 | orchestrator | Friday 13 March 2026 01:09:16 +0000 (0:00:00.588) 0:04:39.199 **********
2026-03-13 01:12:32.644061 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-13 01:12:32.644072 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-13 01:12:32.644080 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-13 01:12:32.644086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-13 01:12:32.644094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-13 
01:12:32.644108 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-13 01:12:32.644120 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-13 01:12:32.644129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-13 01:12:32.644136 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-13 01:12:32.644142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.644148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.644156 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.644172 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.644182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.644189 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 
'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.644195 | orchestrator | 2026-03-13 01:12:32.644202 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-03-13 01:12:32.644208 | orchestrator | Friday 13 March 2026 01:09:20 +0000 (0:00:03.683) 0:04:42.883 ********** 2026-03-13 01:12:32.644215 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-13 01:12:32.644222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-13 01:12:32.644232 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-13 01:12:32.644245 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
8022'], 'timeout': '30'}}})  2026-03-13 01:12:32.644253 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-13 01:12:32.644261 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-13 01:12:32.644267 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.644279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-13 01:12:32.644289 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.644298 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.644305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-13 01:12:32.644311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-13 01:12:32.644318 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.644332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.644339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.644345 | orchestrator | 2026-03-13 01:12:32.644351 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-03-13 01:12:32.644358 | orchestrator | Friday 13 March 2026 01:09:26 +0000 
(0:00:05.802) 0:04:48.685 ********** 2026-03-13 01:12:32.644365 | orchestrator | skipping: [testbed-node-4] 2026-03-13 01:12:32.644371 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:12:32.644377 | orchestrator | skipping: [testbed-node-5] 2026-03-13 01:12:32.644384 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:12:32.644393 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:12:32.644400 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:12:32.644406 | orchestrator | 2026-03-13 01:12:32.644412 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-03-13 01:12:32.644419 | orchestrator | Friday 13 March 2026 01:09:27 +0000 (0:00:01.665) 0:04:50.351 ********** 2026-03-13 01:12:32.644425 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-13 01:12:32.644431 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-13 01:12:32.644440 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-13 01:12:32.644447 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-13 01:12:32.644453 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-13 01:12:32.644459 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-13 01:12:32.644465 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:12:32.644470 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-13 01:12:32.644476 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:12:32.644482 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-13 01:12:32.644489 | orchestrator | skipping: [testbed-node-2] 
2026-03-13 01:12:32.644495 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-13 01:12:32.644501 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-13 01:12:32.644508 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-13 01:12:32.644514 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-13 01:12:32.644520 | orchestrator | 2026-03-13 01:12:32.644527 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-03-13 01:12:32.644538 | orchestrator | Friday 13 March 2026 01:09:31 +0000 (0:00:03.850) 0:04:54.201 ********** 2026-03-13 01:12:32.644544 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:12:32.644550 | orchestrator | skipping: [testbed-node-4] 2026-03-13 01:12:32.644556 | orchestrator | skipping: [testbed-node-5] 2026-03-13 01:12:32.644563 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:12:32.644569 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:12:32.644575 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:12:32.644581 | orchestrator | 2026-03-13 01:12:32.644588 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-03-13 01:12:32.644594 | orchestrator | Friday 13 March 2026 01:09:32 +0000 (0:00:00.568) 0:04:54.770 ********** 2026-03-13 01:12:32.644601 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-13 01:12:32.644607 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-13 01:12:32.644613 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-13 
01:12:32.644620 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-13 01:12:32.644626 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-13 01:12:32.644632 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-13 01:12:32.644639 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-13 01:12:32.644645 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-13 01:12:32.644651 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-13 01:12:32.644658 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-13 01:12:32.644664 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:12:32.644670 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-13 01:12:32.644677 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:12:32.644683 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-13 01:12:32.644690 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:12:32.644696 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-13 01:12:32.644703 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-13 01:12:32.644709 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 
'auth.conf', 'service': 'nova-libvirt'}) 2026-03-13 01:12:32.644715 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-13 01:12:32.644726 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-13 01:12:32.644732 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-13 01:12:32.644738 | orchestrator | 2026-03-13 01:12:32.644745 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-03-13 01:12:32.644751 | orchestrator | Friday 13 March 2026 01:09:37 +0000 (0:00:05.164) 0:04:59.934 ********** 2026-03-13 01:12:32.644761 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-13 01:12:32.644768 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-13 01:12:32.644778 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-13 01:12:32.644784 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-13 01:12:32.644791 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-13 01:12:32.644798 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-13 01:12:32.644804 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-13 01:12:32.644810 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-13 01:12:32.644815 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-13 01:12:32.644819 | orchestrator | skipping: [testbed-node-0] => 
(item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-13 01:12:32.644823 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-13 01:12:32.644828 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-13 01:12:32.644832 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-13 01:12:32.644836 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:12:32.644841 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-13 01:12:32.644845 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:12:32.644849 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-13 01:12:32.644853 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:12:32.644858 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-13 01:12:32.644862 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-13 01:12:32.644866 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-13 01:12:32.644870 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-13 01:12:32.644875 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-13 01:12:32.644879 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-13 01:12:32.644883 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-13 01:12:32.644888 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-13 01:12:32.644892 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 
'ssh_config'}) 2026-03-13 01:12:32.644896 | orchestrator | 2026-03-13 01:12:32.644914 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-03-13 01:12:32.644922 | orchestrator | Friday 13 March 2026 01:09:43 +0000 (0:00:06.594) 0:05:06.529 ********** 2026-03-13 01:12:32.644929 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:12:32.644933 | orchestrator | skipping: [testbed-node-4] 2026-03-13 01:12:32.644937 | orchestrator | skipping: [testbed-node-5] 2026-03-13 01:12:32.644942 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:12:32.644946 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:12:32.644951 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:12:32.644955 | orchestrator | 2026-03-13 01:12:32.644959 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-03-13 01:12:32.644963 | orchestrator | Friday 13 March 2026 01:09:44 +0000 (0:00:00.638) 0:05:07.167 ********** 2026-03-13 01:12:32.644968 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:12:32.644972 | orchestrator | skipping: [testbed-node-4] 2026-03-13 01:12:32.644976 | orchestrator | skipping: [testbed-node-5] 2026-03-13 01:12:32.644981 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:12:32.644988 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:12:32.644992 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:12:32.644997 | orchestrator | 2026-03-13 01:12:32.645001 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-03-13 01:12:32.645005 | orchestrator | Friday 13 March 2026 01:09:45 +0000 (0:00:00.506) 0:05:07.674 ********** 2026-03-13 01:12:32.645009 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:12:32.645014 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:12:32.645018 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:12:32.645022 | orchestrator | changed: 
[testbed-node-4] 2026-03-13 01:12:32.645027 | orchestrator | changed: [testbed-node-3] 2026-03-13 01:12:32.645031 | orchestrator | changed: [testbed-node-5] 2026-03-13 01:12:32.645035 | orchestrator | 2026-03-13 01:12:32.645039 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-03-13 01:12:32.645044 | orchestrator | Friday 13 March 2026 01:09:47 +0000 (0:00:02.007) 0:05:09.682 ********** 2026-03-13 01:12:32.645055 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-13 01:12:32.645060 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-13 01:12:32.645065 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-13 01:12:32.645070 | orchestrator | skipping: [testbed-node-4] 2026-03-13 01:12:32.645075 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-13 01:12:32.645082 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-13 01:12:32.645091 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-13 01:12:32.645095 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:12:32.645102 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 
67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-13 01:12:32.645106 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-13 01:12:32.645111 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-13 01:12:32.645115 | orchestrator | skipping: [testbed-node-5] 2026-03-13 01:12:32.645122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-13 01:12:32.645127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-13 01:12:32.645132 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:12:32.645140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-13 01:12:32.645147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-13 01:12:32.645153 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:12:32.645160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-13 01:12:32.645167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-13 01:12:32.645173 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:12:32.645179 | orchestrator | 2026-03-13 01:12:32.645185 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-03-13 01:12:32.645191 | orchestrator | Friday 13 March 2026 01:09:48 +0000 (0:00:01.321) 0:05:11.003 ********** 2026-03-13 01:12:32.645202 | orchestrator | skipping: [testbed-node-3] => 
(item=nova-compute)  2026-03-13 01:12:32.645209 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-13 01:12:32.645215 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:12:32.645222 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-13 01:12:32.645229 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-13 01:12:32.645235 | orchestrator | skipping: [testbed-node-4] 2026-03-13 01:12:32.645242 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-13 01:12:32.645248 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-13 01:12:32.645253 | orchestrator | skipping: [testbed-node-5] 2026-03-13 01:12:32.645258 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-13 01:12:32.645262 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-13 01:12:32.645266 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:12:32.645271 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-13 01:12:32.645275 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-13 01:12:32.645279 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:12:32.645284 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-13 01:12:32.645288 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-13 01:12:32.645292 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:12:32.645296 | orchestrator | 2026-03-13 01:12:32.645300 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-03-13 01:12:32.645304 | orchestrator | Friday 13 March 2026 01:09:49 +0000 (0:00:00.754) 0:05:11.758 ********** 2026-03-13 01:12:32.645316 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-13 01:12:32.645327 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-13 01:12:32.645334 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-13 01:12:32.645346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-13 01:12:32.645353 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-13 01:12:32.645361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-13 01:12:32.645372 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-13 01:12:32.645382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-13 01:12:32.645390 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-13 01:12:32.645403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.645410 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.645416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.645427 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.645437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.645444 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:32.645455 | orchestrator | 2026-03-13 01:12:32.645462 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-13 01:12:32.645469 | orchestrator | Friday 13 March 2026 01:09:51 +0000 (0:00:02.354) 0:05:14.112 ********** 2026-03-13 01:12:32.645476 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:12:32.645483 | orchestrator | skipping: [testbed-node-4] 2026-03-13 01:12:32.645488 | orchestrator | skipping: [testbed-node-5] 2026-03-13 01:12:32.645492 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:12:32.645497 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:12:32.645501 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:12:32.645505 | orchestrator | 2026-03-13 01:12:32.645510 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-13 01:12:32.645514 | orchestrator | Friday 13 March 2026 01:09:52 +0000 (0:00:00.624) 0:05:14.736 ********** 2026-03-13 01:12:32.645518 | orchestrator | 2026-03-13 01:12:32.645523 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-13 01:12:32.645527 | orchestrator | Friday 13 March 2026 01:09:52 +0000 (0:00:00.117) 0:05:14.854 ********** 2026-03-13 01:12:32.645531 | orchestrator | 2026-03-13 01:12:32.645536 
| orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-13 01:12:32.645540 | orchestrator | Friday 13 March 2026 01:09:52 +0000 (0:00:00.117) 0:05:14.971 ********** 2026-03-13 01:12:32.645544 | orchestrator | 2026-03-13 01:12:32.645549 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-13 01:12:32.645553 | orchestrator | Friday 13 March 2026 01:09:52 +0000 (0:00:00.119) 0:05:15.091 ********** 2026-03-13 01:12:32.645557 | orchestrator | 2026-03-13 01:12:32.645561 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-13 01:12:32.645566 | orchestrator | Friday 13 March 2026 01:09:52 +0000 (0:00:00.221) 0:05:15.313 ********** 2026-03-13 01:12:32.645570 | orchestrator | 2026-03-13 01:12:32.645574 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-13 01:12:32.645579 | orchestrator | Friday 13 March 2026 01:09:52 +0000 (0:00:00.115) 0:05:15.429 ********** 2026-03-13 01:12:32.645583 | orchestrator | 2026-03-13 01:12:32.645587 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-03-13 01:12:32.645591 | orchestrator | Friday 13 March 2026 01:09:52 +0000 (0:00:00.117) 0:05:15.546 ********** 2026-03-13 01:12:32.645596 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:12:32.645600 | orchestrator | changed: [testbed-node-1] 2026-03-13 01:12:32.645604 | orchestrator | changed: [testbed-node-2] 2026-03-13 01:12:32.645608 | orchestrator | 2026-03-13 01:12:32.645613 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-03-13 01:12:32.645617 | orchestrator | Friday 13 March 2026 01:09:58 +0000 (0:00:05.845) 0:05:21.392 ********** 2026-03-13 01:12:32.645621 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:12:32.645625 | orchestrator | changed: [testbed-node-1] 
2026-03-13 01:12:32.645630 | orchestrator | changed: [testbed-node-2] 2026-03-13 01:12:32.645635 | orchestrator | 2026-03-13 01:12:32.645642 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-03-13 01:12:32.645648 | orchestrator | Friday 13 March 2026 01:10:10 +0000 (0:00:11.706) 0:05:33.100 ********** 2026-03-13 01:12:32.645654 | orchestrator | changed: [testbed-node-3] 2026-03-13 01:12:32.645660 | orchestrator | changed: [testbed-node-4] 2026-03-13 01:12:32.645666 | orchestrator | changed: [testbed-node-5] 2026-03-13 01:12:32.645672 | orchestrator | 2026-03-13 01:12:32.645679 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-03-13 01:12:32.645685 | orchestrator | Friday 13 March 2026 01:10:25 +0000 (0:00:15.421) 0:05:48.521 ********** 2026-03-13 01:12:32.645692 | orchestrator | changed: [testbed-node-3] 2026-03-13 01:12:32.645699 | orchestrator | changed: [testbed-node-4] 2026-03-13 01:12:32.645710 | orchestrator | changed: [testbed-node-5] 2026-03-13 01:12:32.645715 | orchestrator | 2026-03-13 01:12:32.645720 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-03-13 01:12:32.645724 | orchestrator | Friday 13 March 2026 01:10:56 +0000 (0:00:30.686) 0:06:19.208 ********** 2026-03-13 01:12:32.645731 | orchestrator | changed: [testbed-node-4] 2026-03-13 01:12:32.645736 | orchestrator | changed: [testbed-node-3] 2026-03-13 01:12:32.645740 | orchestrator | changed: [testbed-node-5] 2026-03-13 01:12:32.645744 | orchestrator | 2026-03-13 01:12:32.645748 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-03-13 01:12:32.645753 | orchestrator | Friday 13 March 2026 01:10:57 +0000 (0:00:00.858) 0:06:20.066 ********** 2026-03-13 01:12:32.645757 | orchestrator | changed: [testbed-node-3] 2026-03-13 01:12:32.645761 | orchestrator | changed: [testbed-node-4] 2026-03-13 
01:12:32.645766 | orchestrator | changed: [testbed-node-5] 2026-03-13 01:12:32.645770 | orchestrator | 2026-03-13 01:12:32.645774 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-03-13 01:12:32.645781 | orchestrator | Friday 13 March 2026 01:10:58 +0000 (0:00:00.852) 0:06:20.918 ********** 2026-03-13 01:12:32.645785 | orchestrator | changed: [testbed-node-4] 2026-03-13 01:12:32.645789 | orchestrator | changed: [testbed-node-3] 2026-03-13 01:12:32.645794 | orchestrator | changed: [testbed-node-5] 2026-03-13 01:12:32.645798 | orchestrator | 2026-03-13 01:12:32.645802 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-03-13 01:12:32.645806 | orchestrator | Friday 13 March 2026 01:11:21 +0000 (0:00:23.209) 0:06:44.127 ********** 2026-03-13 01:12:32.645810 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:12:32.645815 | orchestrator | 2026-03-13 01:12:32.645819 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-03-13 01:12:32.645823 | orchestrator | Friday 13 March 2026 01:11:21 +0000 (0:00:00.121) 0:06:44.249 ********** 2026-03-13 01:12:32.645827 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:12:32.645832 | orchestrator | skipping: [testbed-node-4] 2026-03-13 01:12:32.645836 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:12:32.645840 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:12:32.645845 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:12:32.645850 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2026-03-13 01:12:32.645854 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-13 01:12:32.645858 | orchestrator | 2026-03-13 01:12:32.645863 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-03-13 01:12:32.645867 | orchestrator | Friday 13 March 2026 01:11:43 +0000 (0:00:21.694) 0:07:05.943 ********** 2026-03-13 01:12:32.645871 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:12:32.645875 | orchestrator | skipping: [testbed-node-4] 2026-03-13 01:12:32.645881 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:12:32.645888 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:12:32.645894 | orchestrator | skipping: [testbed-node-5] 2026-03-13 01:12:32.645900 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:12:32.645939 | orchestrator | 2026-03-13 01:12:32.645946 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-03-13 01:12:32.645953 | orchestrator | Friday 13 March 2026 01:11:51 +0000 (0:00:08.326) 0:07:14.270 ********** 2026-03-13 01:12:32.645961 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:12:32.645967 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:12:32.645973 | orchestrator | skipping: [testbed-node-4] 2026-03-13 01:12:32.645980 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:12:32.645987 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:12:32.645994 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2026-03-13 01:12:32.646001 | orchestrator | 2026-03-13 01:12:32.646008 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-13 01:12:32.646065 | orchestrator | Friday 13 March 2026 01:11:55 +0000 (0:00:03.459) 0:07:17.730 ********** 2026-03-13 01:12:32.646074 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-13 01:12:32.646081 | 
orchestrator | 2026-03-13 01:12:32.646087 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-13 01:12:32.646093 | orchestrator | Friday 13 March 2026 01:12:08 +0000 (0:00:13.773) 0:07:31.503 ********** 2026-03-13 01:12:32.646099 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-13 01:12:32.646105 | orchestrator | 2026-03-13 01:12:32.646112 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-03-13 01:12:32.646118 | orchestrator | Friday 13 March 2026 01:12:10 +0000 (0:00:01.455) 0:07:32.959 ********** 2026-03-13 01:12:32.646123 | orchestrator | skipping: [testbed-node-5] 2026-03-13 01:12:32.646128 | orchestrator | 2026-03-13 01:12:32.646134 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-03-13 01:12:32.646140 | orchestrator | Friday 13 March 2026 01:12:11 +0000 (0:00:01.418) 0:07:34.377 ********** 2026-03-13 01:12:32.646147 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-13 01:12:32.646153 | orchestrator | 2026-03-13 01:12:32.646159 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-03-13 01:12:32.646164 | orchestrator | Friday 13 March 2026 01:12:23 +0000 (0:00:11.584) 0:07:45.961 ********** 2026-03-13 01:12:32.646170 | orchestrator | ok: [testbed-node-3] 2026-03-13 01:12:32.646177 | orchestrator | ok: [testbed-node-4] 2026-03-13 01:12:32.646182 | orchestrator | ok: [testbed-node-5] 2026-03-13 01:12:32.646188 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:12:32.646194 | orchestrator | ok: [testbed-node-1] 2026-03-13 01:12:32.646200 | orchestrator | ok: [testbed-node-2] 2026-03-13 01:12:32.646206 | orchestrator | 2026-03-13 01:12:32.646212 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-03-13 01:12:32.646218 | orchestrator | 2026-03-13 
01:12:32.646225 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-03-13 01:12:32.646231 | orchestrator | Friday 13 March 2026 01:12:24 +0000 (0:00:01.596) 0:07:47.557 ********** 2026-03-13 01:12:32.646236 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:12:32.646243 | orchestrator | changed: [testbed-node-1] 2026-03-13 01:12:32.646250 | orchestrator | changed: [testbed-node-2] 2026-03-13 01:12:32.646257 | orchestrator | 2026-03-13 01:12:32.646263 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-03-13 01:12:32.646270 | orchestrator | 2026-03-13 01:12:32.646277 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-03-13 01:12:32.646284 | orchestrator | Friday 13 March 2026 01:12:25 +0000 (0:00:01.066) 0:07:48.624 ********** 2026-03-13 01:12:32.646298 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:12:32.646305 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:12:32.646312 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:12:32.646319 | orchestrator | 2026-03-13 01:12:32.646326 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-03-13 01:12:32.646331 | orchestrator | 2026-03-13 01:12:32.646337 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-03-13 01:12:32.646343 | orchestrator | Friday 13 March 2026 01:12:26 +0000 (0:00:00.515) 0:07:49.140 ********** 2026-03-13 01:12:32.646350 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-03-13 01:12:32.646362 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-13 01:12:32.646368 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-13 01:12:32.646376 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-03-13 01:12:32.646383 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-03-13 01:12:32.646390 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-03-13 01:12:32.646397 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:12:32.646403 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-03-13 01:12:32.646416 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-03-13 01:12:32.646423 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-03-13 01:12:32.646429 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-03-13 01:12:32.646435 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-03-13 01:12:32.646441 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-03-13 01:12:32.646447 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-03-13 01:12:32.646453 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-03-13 01:12:32.646460 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-03-13 01:12:32.646466 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-03-13 01:12:32.646472 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-03-13 01:12:32.646478 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-03-13 01:12:32.646484 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:12:32.646490 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-03-13 01:12:32.646497 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-03-13 01:12:32.646502 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-03-13 01:12:32.646509 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-03-13 01:12:32.646513 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-03-13 01:12:32.646517 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-03-13 01:12:32.646520 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:12:32.646524 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-03-13 01:12:32.646528 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-03-13 01:12:32.646531 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-03-13 01:12:32.646535 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-03-13 01:12:32.646539 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-03-13 01:12:32.646542 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:12:32.646546 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-03-13 01:12:32.646550 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:12:32.646553 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-03-13 01:12:32.646557 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-03-13 01:12:32.646561 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-03-13 01:12:32.646564 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-03-13 01:12:32.646568 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-03-13 01:12:32.646572 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-03-13 01:12:32.646576 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:12:32.646579 | orchestrator |
2026-03-13 01:12:32.646583 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-03-13 01:12:32.646587 | orchestrator |
2026-03-13 01:12:32.646591 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-03-13 01:12:32.646594 | orchestrator | Friday 13 March 2026 01:12:27 +0000 (0:00:01.293) 0:07:50.433 **********
2026-03-13 01:12:32.646598 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-03-13 01:12:32.646602 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-03-13 01:12:32.646605 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:12:32.646609 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-03-13 01:12:32.646613 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-03-13 01:12:32.646616 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:12:32.646620 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-03-13 01:12:32.646627 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-03-13 01:12:32.646630 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:12:32.646634 | orchestrator |
2026-03-13 01:12:32.646638 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-03-13 01:12:32.646642 | orchestrator |
2026-03-13 01:12:32.646648 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-03-13 01:12:32.646654 | orchestrator | Friday 13 March 2026 01:12:28 +0000 (0:00:00.812) 0:07:51.245 **********
2026-03-13 01:12:32.646660 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:12:32.646666 | orchestrator |
2026-03-13 01:12:32.646672 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-03-13 01:12:32.646678 | orchestrator |
2026-03-13 01:12:32.646689 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-03-13 01:12:32.646696 | orchestrator | Friday 13 March 2026 01:12:29 +0000 (0:00:00.639) 0:07:51.885 **********
2026-03-13 01:12:32.646702 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:12:32.646708 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:12:32.646714 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:12:32.646721 | orchestrator |
2026-03-13 01:12:32.646727 | orchestrator | PLAY RECAP *********************************************************************
2026-03-13 01:12:32.646734 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-13 01:12:32.646745 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0
2026-03-13 01:12:32.646751 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=52  rescued=0 ignored=0
2026-03-13 01:12:32.646758 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=52  rescued=0 ignored=0
2026-03-13 01:12:32.646764 | orchestrator | testbed-node-3 : ok=40  changed=27  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-13 01:12:32.646771 | orchestrator | testbed-node-4 : ok=39  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-13 01:12:32.646777 | orchestrator | testbed-node-5 : ok=44  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-03-13 01:12:32.646784 | orchestrator |
2026-03-13 01:12:32.646790 | orchestrator |
2026-03-13 01:12:32.646796 | orchestrator | TASKS RECAP ********************************************************************
2026-03-13 01:12:32.646802 | orchestrator | Friday 13 March 2026 01:12:29 +0000 (0:00:00.614) 0:07:52.499 **********
2026-03-13 01:12:32.646808 | orchestrator | ===============================================================================
2026-03-13 01:12:32.646815 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 32.76s
2026-03-13 01:12:32.646821 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 30.69s
2026-03-13 01:12:32.646827 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 23.21s
2026-03-13 01:12:32.646833 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.69s
2026-03-13 01:12:32.646839 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.33s
2026-03-13 01:12:32.646845 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 16.78s
2026-03-13 01:12:32.646851 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 15.42s
2026-03-13 01:12:32.646858 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.95s
2026-03-13 01:12:32.646864 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.89s
2026-03-13 01:12:32.646871 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 13.79s
2026-03-13 01:12:32.646881 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.77s
2026-03-13 01:12:32.646887 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.67s
2026-03-13 01:12:32.646894 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.19s
2026-03-13 01:12:32.646900 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 11.71s
2026-03-13 01:12:32.646919 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.58s
2026-03-13 01:12:32.646925 | orchestrator | nova : Restart nova-api container --------------------------------------- 9.99s
2026-03-13 01:12:32.646931 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.33s
2026-03-13 01:12:32.646937 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 7.90s
2026-03-13 01:12:32.646944 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 6.74s
2026-03-13 01:12:32.646950 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 6.59s
2026-03-13 01:12:32.646956 | orchestrator | 2026-03-13 01:12:32 | INFO  | Task 6685e78e-c366-430c-99eb-c698f723c8a2 is in state STARTED
2026-03-13 01:12:32.646963 | orchestrator | 2026-03-13 01:12:32 | INFO  | Wait 1 second(s) until the next check
2026-03-13 01:12:35.684868 | orchestrator | 2026-03-13 01:12:35 | INFO  | Task 6685e78e-c366-430c-99eb-c698f723c8a2 is in state STARTED
2026-03-13 01:12:35.684952 | orchestrator | 2026-03-13 01:12:35 | INFO  | Wait 1 second(s) until the next check
2026-03-13 01:12:38.737512 | orchestrator | 2026-03-13 01:12:38 | INFO  | Task 6685e78e-c366-430c-99eb-c698f723c8a2 is in state STARTED
2026-03-13 01:12:38.737576 | orchestrator | 2026-03-13 01:12:38 | INFO  | Wait 1 second(s) until the next check
2026-03-13 01:12:41.779387 | orchestrator | 2026-03-13 01:12:41 | INFO  | Task 6685e78e-c366-430c-99eb-c698f723c8a2 is in state STARTED
2026-03-13 01:12:41.779440 | orchestrator | 2026-03-13 01:12:41 | INFO  | Wait 1 second(s) until the next check
2026-03-13 01:12:44.827018 | orchestrator | 2026-03-13 01:12:44 | INFO  | Task 6685e78e-c366-430c-99eb-c698f723c8a2 is in state STARTED
2026-03-13 01:12:44.827077 | orchestrator | 2026-03-13 01:12:44 | INFO  | Wait 1 second(s) until the next check
2026-03-13 01:12:47.874740 | orchestrator | 2026-03-13 01:12:47 | INFO  | Task 6685e78e-c366-430c-99eb-c698f723c8a2 is in state STARTED
2026-03-13 01:12:47.874809 | orchestrator | 2026-03-13 01:12:47 | INFO  | Wait 1 second(s) until the next check
2026-03-13 01:12:50.927657 | orchestrator | 2026-03-13 01:12:50 | INFO  | Task 6685e78e-c366-430c-99eb-c698f723c8a2 is in state STARTED
2026-03-13 01:12:50.927725 | orchestrator | 2026-03-13 01:12:50 | INFO  | Wait 1 second(s) until the next check
2026-03-13 01:12:53.980286 | orchestrator | 2026-03-13 01:12:53 | INFO  | Task 6685e78e-c366-430c-99eb-c698f723c8a2 is in state SUCCESS
2026-03-13 01:12:53.981567 | orchestrator |
2026-03-13 01:12:53.981607 | orchestrator |
2026-03-13 01:12:53.981615 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-13 01:12:53.981621 | orchestrator |
2026-03-13 01:12:53.981627 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-13 01:12:53.981631 | orchestrator | Friday 13 March 2026 01:08:35 +0000 (0:00:00.286) 0:00:00.286 **********
2026-03-13 01:12:53.981635 | orchestrator | ok: [testbed-node-0]
2026-03-13 01:12:53.981640 | orchestrator | ok: [testbed-node-1]
2026-03-13 01:12:53.981644 | orchestrator | ok: [testbed-node-2]
2026-03-13 01:12:53.981647 | orchestrator |
2026-03-13 01:12:53.981651 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-13 01:12:53.981655 | orchestrator | Friday 13 March 2026 01:08:35 +0000 (0:00:00.323) 0:00:00.610 **********
2026-03-13 01:12:53.981658 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-03-13 01:12:53.981677 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-03-13 01:12:53.981681 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-03-13 01:12:53.981685 | orchestrator |
2026-03-13 01:12:53.981688 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-03-13 01:12:53.981692 | orchestrator |
2026-03-13 01:12:53.981696 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-13 01:12:53.981701 | orchestrator | Friday 13 March 2026 01:08:35 +0000 (0:00:00.442) 0:00:01.052 **********
2026-03-13 01:12:53.981706 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 01:12:53.981711 | orchestrator |
2026-03-13 01:12:53.981717 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2026-03-13 01:12:53.981722 | orchestrator | Friday 13 March 2026 01:08:36 +0000 (0:00:00.548) 0:00:01.600 **********
2026-03-13 01:12:53.981729 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-03-13 01:12:53.981734 | orchestrator |
2026-03-13 01:12:53.981739 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2026-03-13 01:12:53.981745 | orchestrator | Friday 13 March 2026 01:08:39 +0000 (0:00:03.236) 0:00:04.837 **********
2026-03-13 01:12:53.981750 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-03-13 01:12:53.981754 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-03-13 01:12:53.981758 | orchestrator |
2026-03-13 01:12:53.981761 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-03-13 01:12:53.981765 | orchestrator | Friday 13 March 2026 01:08:45 +0000 (0:00:05.784) 0:00:10.621 **********
2026-03-13 01:12:53.981769 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-13 01:12:53.981809 | orchestrator |
2026-03-13 01:12:53.981817 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-03-13 01:12:53.981823 | orchestrator | Friday 13 March 2026 01:08:48 +0000 (0:00:02.674) 0:00:13.296 **********
2026-03-13 01:12:53.981828 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-03-13 01:12:53.981835 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-03-13 01:12:53.981839 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-13 01:12:53.981845 | orchestrator |
2026-03-13 01:12:53.981850 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-03-13 01:12:53.981855 | orchestrator | Friday 13 March 2026 01:08:54 +0000 (0:00:06.648) 0:00:19.944 **********
2026-03-13 01:12:53.981861 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-13 01:12:53.981866 | orchestrator |
2026-03-13 01:12:53.981871 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2026-03-13 01:12:53.981876 | orchestrator | Friday 13 March 2026 01:08:57 +0000 (0:00:02.747) 0:00:22.692 **********
2026-03-13 01:12:53.981916 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-03-13 01:12:53.981922 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-03-13 01:12:53.981927 | orchestrator |
2026-03-13 01:12:53.981937 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-03-13 01:12:53.981946 | orchestrator | Friday 13 March 2026 01:09:04 +0000 (0:00:07.239) 0:00:29.931 **********
2026-03-13 01:12:53.981953 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-03-13 01:12:53.981959 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-03-13 01:12:53.982053 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-03-13 01:12:53.982061 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-03-13 01:12:53.982066 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-03-13 01:12:53.982071 | orchestrator |
2026-03-13 01:12:53.982077 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-13 01:12:53.982088 | orchestrator | Friday 13 March 2026 01:09:18 +0000 (0:00:14.146) 0:00:44.078 **********
2026-03-13 01:12:53.982092 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 01:12:53.982096 | orchestrator |
2026-03-13 01:12:53.982102 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-03-13 01:12:53.982107 | orchestrator | Friday 13 March 2026 01:09:19 +0000 (0:00:00.589) 0:00:44.667 **********
2026-03-13 01:12:53.982113 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:12:53.982120 | orchestrator |
2026-03-13 01:12:53.982133 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-03-13 01:12:53.982139 | orchestrator | Friday 13 March 2026 01:09:24 +0000 (0:00:05.025) 0:00:49.693 **********
2026-03-13 01:12:53.982422 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:12:53.982430 | orchestrator |
2026-03-13 01:12:53.982436 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-03-13 01:12:53.982462 | orchestrator | Friday 13 March 2026 01:09:28 +0000 (0:00:04.370) 0:00:54.063 **********
2026-03-13 01:12:53.982469 | orchestrator | ok: [testbed-node-0]
2026-03-13 01:12:53.982473 | orchestrator |
2026-03-13 01:12:53.982476 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-03-13 01:12:53.982479 | orchestrator | Friday 13 March 2026 01:09:32 +0000 (0:00:03.747) 0:00:57.811 **********
2026-03-13 01:12:53.982482 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-03-13 01:12:53.982485 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-03-13 01:12:53.982488 | orchestrator |
2026-03-13 01:12:53.982491 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-03-13 01:12:53.982494 | orchestrator | Friday 13 March 2026 01:09:41 +0000 (0:00:09.368) 0:01:07.179 **********
2026-03-13 01:12:53.982497 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-03-13 01:12:53.982501 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-03-13 01:12:53.982504 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-03-13 01:12:53.982508 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-03-13 01:12:53.982511 | orchestrator |
2026-03-13 01:12:53.982514 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-03-13 01:12:53.982517 | orchestrator | Friday 13 March 2026 01:09:56 +0000 (0:00:14.804) 0:01:21.984 **********
2026-03-13 01:12:53.982520 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:12:53.982523 | orchestrator |
2026-03-13 01:12:53.982526 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-03-13 01:12:53.982529 | orchestrator | Friday 13 March 2026 01:10:01 +0000 (0:00:04.880) 0:01:26.864 **********
2026-03-13 01:12:53.982532 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:12:53.982535 | orchestrator |
2026-03-13 01:12:53.982539 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-03-13 01:12:53.982544 | orchestrator | Friday 13 March 2026 01:10:07 +0000 (0:00:05.386) 0:01:32.251 **********
2026-03-13 01:12:53.982549 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:12:53.982554 | orchestrator |
2026-03-13 01:12:53.982559 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-03-13 01:12:53.982563 | orchestrator | Friday 13 March 2026 01:10:07 +0000 (0:00:00.204) 0:01:32.456 **********
2026-03-13 01:12:53.982568 | orchestrator | ok: [testbed-node-0]
2026-03-13 01:12:53.982573 | orchestrator |
2026-03-13 01:12:53.982578 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-13 01:12:53.982583 | orchestrator | Friday 13 March 2026 01:10:10 +0000 (0:00:03.297) 0:01:35.754 **********
2026-03-13 01:12:53.982594 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-1, testbed-node-0, testbed-node-2
2026-03-13 01:12:53.982598 | orchestrator |
2026-03-13 01:12:53.982601 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2026-03-13 01:12:53.982604 | orchestrator | Friday 13 March 2026 01:10:11 +0000 (0:00:01.056) 0:01:36.810 **********
2026-03-13 01:12:53.982607 | orchestrator | changed: [testbed-node-1]
2026-03-13 01:12:53.982612 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:12:53.982618 | orchestrator | changed: [testbed-node-2]
2026-03-13 01:12:53.982623 | orchestrator |
2026-03-13 01:12:53.982628 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2026-03-13 01:12:53.982633 | orchestrator | Friday 13 March 2026 01:10:16 +0000 (0:00:04.996) 0:01:41.807 **********
2026-03-13 01:12:53.982636 | orchestrator | changed: [testbed-node-2]
2026-03-13 01:12:53.982639 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:12:53.982642 | orchestrator | changed: [testbed-node-1]
2026-03-13 01:12:53.982645 | orchestrator |
2026-03-13 01:12:53.982648 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2026-03-13 01:12:53.982651 | orchestrator | Friday 13 March 2026 01:10:20 +0000 (0:00:03.889) 0:01:45.696 **********
2026-03-13 01:12:53.982654 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:12:53.982657 | orchestrator | changed: [testbed-node-1]
2026-03-13 01:12:53.982660 | orchestrator | changed: [testbed-node-2]
2026-03-13 01:12:53.982663 | orchestrator |
2026-03-13 01:12:53.982666 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2026-03-13 01:12:53.982669 | orchestrator | Friday 13 March 2026 01:10:21 +0000 (0:00:00.706) 0:01:46.402 **********
2026-03-13 01:12:53.982672 | orchestrator | ok: [testbed-node-0]
2026-03-13 01:12:53.982675 | orchestrator | ok: [testbed-node-1]
2026-03-13 01:12:53.982678 | orchestrator | ok: [testbed-node-2]
2026-03-13 01:12:53.982681 | orchestrator |
2026-03-13 01:12:53.982684 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2026-03-13 01:12:53.982687 | orchestrator | Friday 13 March 2026 01:10:23 +0000 (0:00:01.843) 0:01:48.246 **********
2026-03-13 01:12:53.982692 | orchestrator | changed: [testbed-node-1]
2026-03-13 01:12:53.982698 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:12:53.982701 | orchestrator | changed: [testbed-node-2]
2026-03-13 01:12:53.982704 | orchestrator |
2026-03-13 01:12:53.982707 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2026-03-13 01:12:53.982710 | orchestrator | Friday 13 March 2026 01:10:24 +0000 (0:00:01.198) 0:01:49.445 **********
2026-03-13 01:12:53.982713 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:12:53.982722 | orchestrator | changed: [testbed-node-1]
2026-03-13 01:12:53.982727 | orchestrator | changed: [testbed-node-2]
2026-03-13 01:12:53.982732 | orchestrator |
2026-03-13 01:12:53.982737 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2026-03-13 01:12:53.982742 | orchestrator | Friday 13 March 2026 01:10:25 +0000 (0:00:01.119) 0:01:50.565 **********
2026-03-13 01:12:53.982747 | orchestrator | changed: [testbed-node-1]
2026-03-13 01:12:53.982751 | orchestrator | changed: [testbed-node-2]
2026-03-13 01:12:53.982756 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:12:53.982761 | orchestrator |
2026-03-13 01:12:53.982785 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2026-03-13 01:12:53.982792 | orchestrator | Friday 13 March 2026 01:10:27 +0000 (0:00:01.668) 0:01:52.233 **********
2026-03-13 01:12:53.982795 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:12:53.982800 | orchestrator | changed: [testbed-node-1]
2026-03-13 01:12:53.982805 | orchestrator | changed: [testbed-node-2]
2026-03-13 01:12:53.982810 | orchestrator |
2026-03-13 01:12:53.982815 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2026-03-13 01:12:53.982819 | orchestrator | Friday 13 March 2026 01:10:28 +0000 (0:00:01.643) 0:01:53.876 **********
2026-03-13 01:12:53.982825 | orchestrator | ok: [testbed-node-0]
2026-03-13 01:12:53.982835 | orchestrator | ok: [testbed-node-1]
2026-03-13 01:12:53.982840 | orchestrator | ok: [testbed-node-2]
2026-03-13 01:12:53.982845 | orchestrator |
2026-03-13 01:12:53.982849 | orchestrator | TASK [octavia : Gather facts] **************************************************
2026-03-13 01:12:53.982852 | orchestrator | Friday 13 March 2026 01:10:29 +0000 (0:00:00.712) 0:01:54.589 **********
2026-03-13 01:12:53.982855 | orchestrator | ok: [testbed-node-0]
2026-03-13 01:12:53.982858 | orchestrator | ok: [testbed-node-1]
2026-03-13 01:12:53.982861 | orchestrator | ok: [testbed-node-2]
2026-03-13 01:12:53.982864 | orchestrator |
2026-03-13 01:12:53.982867 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-13 01:12:53.982870 | orchestrator | Friday 13 March 2026 01:10:31 +0000 (0:00:02.344) 0:01:56.933 **********
2026-03-13 01:12:53.982873 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-13 01:12:53.982876 | orchestrator |
2026-03-13 01:12:53.982879 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-03-13 01:12:53.982883 | orchestrator | Friday 13 March 2026 01:10:32 +0000 (0:00:00.613) 0:01:57.547 **********
2026-03-13 01:12:53.982888 | orchestrator | ok: [testbed-node-0]
2026-03-13 01:12:53.982891 | orchestrator |
2026-03-13 01:12:53.982896 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-03-13 01:12:53.982903 | orchestrator | Friday 13 March 2026 01:10:35 +0000 (0:00:03.559) 0:02:01.106 **********
2026-03-13 01:12:53.982909 | orchestrator | ok: [testbed-node-0]
2026-03-13 01:12:53.982914 | orchestrator |
2026-03-13 01:12:53.982919 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2026-03-13 01:12:53.982924 | orchestrator | Friday 13 March 2026 01:10:38 +0000 (0:00:03.054) 0:02:04.161 **********
2026-03-13 01:12:53.982929 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-03-13 01:12:53.982933 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-03-13 01:12:53.982938 | orchestrator |
2026-03-13 01:12:53.982943 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-03-13 01:12:53.982947 | orchestrator | Friday 13 March 2026 01:10:45 +0000 (0:00:06.474) 0:02:10.635 **********
2026-03-13 01:12:53.982952 | orchestrator | ok: [testbed-node-0]
2026-03-13 01:12:53.982957 | orchestrator |
2026-03-13 01:12:53.982963 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2026-03-13 01:12:53.982980 | orchestrator | Friday 13 March 2026 01:10:48 +0000 (0:00:03.239) 0:02:13.875 **********
2026-03-13 01:12:53.982985 | orchestrator | ok: [testbed-node-0]
2026-03-13 01:12:53.982989 | orchestrator | ok: [testbed-node-1]
2026-03-13 01:12:53.982994 | orchestrator | ok: [testbed-node-2]
2026-03-13 01:12:53.982999 | orchestrator |
2026-03-13 01:12:53.983005 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2026-03-13 01:12:53.983008 | orchestrator | Friday 13 March 2026 01:10:49 +0000 (0:00:00.325) 0:02:14.201 **********
2026-03-13 01:12:53.983013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-13 01:12:53.983039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-13 01:12:53.983046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-13 01:12:53.983050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-13 01:12:53.983055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-13 01:12:53.983058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-13 01:12:53.983062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-13 01:12:53.983067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-13 01:12:53.983083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-13 01:12:53.983087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-13 01:12:53.983093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-13 01:12:53.983098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-13 01:12:53.983101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-13 01:12:53.983106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-13 01:12:53.983118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-13 01:12:53.983126 | orchestrator |
2026-03-13 01:12:53.983131 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2026-03-13 01:12:53.983136 | orchestrator | Friday 13 March 2026 01:10:51 +0000 (0:00:02.331) 0:02:16.533 **********
2026-03-13 01:12:53.983141 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:12:53.983146 | orchestrator |
2026-03-13 01:12:53.983166 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2026-03-13 01:12:53.983171 | orchestrator | Friday 13 March 2026 01:10:51 +0000 (0:00:00.136) 0:02:16.670 **********
2026-03-13 01:12:53.983174 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:12:53.983177 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:12:53.983180 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:12:53.983183 | orchestrator |
2026-03-13 01:12:53.983189 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2026-03-13 01:12:53.983193 | orchestrator | Friday 13 March 2026 01:10:51 +0000 (0:00:00.467) 0:02:17.137 **********
2026-03-13 01:12:53.983199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-13 01:12:53.983206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-13 01:12:53.983211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-13 01:12:53.983216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-13 01:12:53.983226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-13 01:12:53.983229 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:12:53.983246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-13 01:12:53.983291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-13 01:12:53.983297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-13 01:12:53.983303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-13 01:12:53.983308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-13 01:12:53.983320 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:12:53.983328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-13 01:12:53.983350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-13 
01:12:53.983357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-13 01:12:53.983362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-13 01:12:53.983368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-13 01:12:53.983373 | orchestrator | skipping: [testbed-node-2] 2026-03-13 
01:12:53.983384 | orchestrator | 2026-03-13 01:12:53.983387 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-13 01:12:53.983391 | orchestrator | Friday 13 March 2026 01:10:52 +0000 (0:00:00.679) 0:02:17.816 ********** 2026-03-13 01:12:53.983394 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 01:12:53.983397 | orchestrator | 2026-03-13 01:12:53.983400 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-03-13 01:12:53.983403 | orchestrator | Friday 13 March 2026 01:10:53 +0000 (0:00:00.601) 0:02:18.418 ********** 2026-03-13 01:12:53.983406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-13 01:12:53.983424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-13 01:12:53.983428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-13 01:12:53.983431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-13 01:12:53.983435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-13 01:12:53.983441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-13 01:12:53.983444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-13 01:12:53.983449 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-13 01:12:53.983455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-13 01:12:53.983458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-13 01:12:53.983462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-13 01:12:53.983467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-13 01:12:53.983470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:53.983473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:53.983482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:53.983486 | orchestrator | 2026-03-13 01:12:53.983489 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-03-13 01:12:53.983492 | orchestrator | Friday 13 March 2026 01:10:59 +0000 (0:00:06.523) 0:02:24.942 ********** 2026-03-13 01:12:53.983495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-13 01:12:53.983499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-13 01:12:53.983504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-13 01:12:53.983507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-13 01:12:53.983510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-13 01:12:53.983514 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:12:53.983521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-13 01:12:53.983525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-13 01:12:53.983528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-13 01:12:53.983533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-13 01:12:53.983536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-13 01:12:53.983539 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:12:53.983543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-13 01:12:53.983548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-13 01:12:53.983553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-13 01:12:53.983557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-13 01:12:53.983562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-13 01:12:53.983565 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:12:53.983568 | orchestrator | 2026-03-13 01:12:53.983571 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-13 
01:12:53.983574 | orchestrator | Friday 13 March 2026 01:11:00 +0000 (0:00:01.060) 0:02:26.002 ********** 2026-03-13 01:12:53.983578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-13 01:12:53.983581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-13 01:12:53.983586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-13 01:12:53.983591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-13 01:12:53.983594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-13 01:12:53.983603 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:12:53.983606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-13 01:12:53.983609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-13 01:12:53.983613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-13 01:12:53.983619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-13 01:12:53.983625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-13 01:12:53.983628 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:12:53.983633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-13 01:12:53.983636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-13 01:12:53.983640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-13 01:12:53.983643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-13 01:12:53.983646 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-13 01:12:53.983649 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:12:53.983653 | orchestrator | 2026-03-13 01:12:53.983657 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-03-13 01:12:53.983661 | orchestrator | Friday 13 March 2026 01:11:02 +0000 (0:00:01.287) 0:02:27.289 ********** 2026-03-13 01:12:53.983667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-13 01:12:53.983672 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-13 01:12:53.983675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-13 01:12:53.983678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 
'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-13 01:12:53.983681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-13 01:12:53.983686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-13 01:12:53.983692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-13 01:12:53.983697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-13 01:12:53.983701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-13 01:12:53.983704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-13 01:12:53.983707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-13 01:12:53.983710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-13 01:12:53.983717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-13 
01:12:53.983723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:53.983726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-13 01:12:53.983729 | orchestrator | 2026-03-13 01:12:53.983732 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-03-13 01:12:53.983735 | orchestrator | Friday 13 March 2026 01:11:06 +0000 (0:00:04.678) 0:02:31.968 ********** 2026-03-13 01:12:53.983739 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-13 01:12:53.983742 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-13 01:12:53.983745 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-13 01:12:53.983748 | orchestrator | 2026-03-13 01:12:53.983751 | orchestrator | TASK [octavia 
: Copying over octavia.conf] ************************************* 2026-03-13 01:12:53.983754 | orchestrator | Friday 13 March 2026 01:11:08 +0000 (0:00:01.704) 0:02:33.672 ********** 2026-03-13 01:12:53.983759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-13 01:12:53.983766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-13 01:12:53.983780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-13 01:12:53.983788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-13 01:12:53.983793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-13 01:12:53.983799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-13 01:12:53.983804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-13 01:12:53.983809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}})
2026-03-13 01:12:53.983820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-13 01:12:53.983828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-13 01:12:53.983835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-13 01:12:53.983840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-13 01:12:53.983846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-13 01:12:53.983851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-13 01:12:53.983856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-13 01:12:53.983864 | orchestrator |
2026-03-13 01:12:53.983869 | orchestrator | TASK [octavia : Copying over Octavia SSH key] **********************************
2026-03-13 01:12:53.983874 | orchestrator | Friday 13 March 2026 01:11:25 +0000 (0:00:17.246) 0:02:50.919 **********
2026-03-13 01:12:53.983879 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:12:53.983885 | orchestrator | changed: [testbed-node-1]
2026-03-13 01:12:53.983890 | orchestrator | changed: [testbed-node-2]
2026-03-13 01:12:53.983893 | orchestrator |
2026-03-13 01:12:53.983896 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ******************
2026-03-13 01:12:53.983899 | orchestrator | Friday 13 March 2026 01:11:27 +0000 (0:00:01.472) 0:02:52.391 **********
2026-03-13 01:12:53.983902 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-03-13 01:12:53.983905 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-03-13 01:12:53.983910 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-03-13 01:12:53.983914 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-03-13 01:12:53.983918 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-03-13 01:12:53.983923 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-03-13 01:12:53.983930 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-03-13 01:12:53.983935 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-03-13 01:12:53.983940 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-03-13 01:12:53.983945 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-03-13 01:12:53.983950 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-03-13 01:12:53.983955 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-03-13 01:12:53.983960 | orchestrator |
2026-03-13 01:12:53.983994 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************
2026-03-13 01:12:53.984000 | orchestrator | Friday 13 March 2026 01:11:31 +0000 (0:00:04.540) 0:02:56.932 **********
2026-03-13 01:12:53.984006 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-03-13 01:12:53.984009 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-03-13 01:12:53.984012 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-03-13 01:12:53.984015 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-03-13 01:12:53.984018 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-03-13 01:12:53.984022 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-03-13 01:12:53.984027 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-03-13 01:12:53.984033 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-03-13 01:12:53.984041 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-03-13 01:12:53.984046 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-03-13 01:12:53.984052 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-03-13 01:12:53.984057 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-03-13 01:12:53.984061 | orchestrator |
2026-03-13 01:12:53.984066 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] **********
2026-03-13 01:12:53.984070 | orchestrator | Friday 13 March 2026 01:11:37 +0000 (0:00:05.274) 0:03:02.207 **********
2026-03-13 01:12:53.984075 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-03-13 01:12:53.984080 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-03-13 01:12:53.984085 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-03-13 01:12:53.984095 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-03-13 01:12:53.984099 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-03-13 01:12:53.984102 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-03-13 01:12:53.984105 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-03-13 01:12:53.984108 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-03-13 01:12:53.984111 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-03-13 01:12:53.984114 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-03-13 01:12:53.984117 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-03-13 01:12:53.984120 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-03-13 01:12:53.984123 | orchestrator |
2026-03-13 01:12:53.984126 | orchestrator | TASK [octavia : Check octavia containers] **************************************
2026-03-13 01:12:53.984129 | orchestrator | Friday 13 March 2026 01:11:41 +0000 (0:00:04.498) 0:03:06.705 **********
2026-03-13 01:12:53.984133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-13 01:12:53.984144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-13 01:12:53.984148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-13 01:12:53.984151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-13 01:12:53.984157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-13 01:12:53.984160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-13 01:12:53.984163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-13 01:12:53.984170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-13 01:12:53.984174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-13 01:12:53.984177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-13 01:12:53.984182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-13 01:12:53.984185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-13 01:12:53.984189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-13 01:12:53.984192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-13 01:12:53.984200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-13 01:12:53.984204 | orchestrator |
2026-03-13 01:12:53.984207 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-13 01:12:53.984211 | orchestrator | Friday 13 March 2026 01:11:45 +0000 (0:00:03.533) 0:03:10.238 **********
2026-03-13 01:12:53.984216 | orchestrator | skipping: [testbed-node-0]
2026-03-13 01:12:53.984221 | orchestrator | skipping: [testbed-node-1]
2026-03-13 01:12:53.984226 | orchestrator | skipping: [testbed-node-2]
2026-03-13 01:12:53.984231 | orchestrator |
2026-03-13 01:12:53.984236 | orchestrator | TASK [octavia : Creating Octavia database] *************************************
2026-03-13 01:12:53.984241 | orchestrator | Friday 13 March 2026 01:11:45 +0000 (0:00:00.676) 0:03:10.917 **********
2026-03-13 01:12:53.984246 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:12:53.984251 | orchestrator |
2026-03-13 01:12:53.984256 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2026-03-13 01:12:53.984261 | orchestrator | Friday 13 March 2026 01:11:48 +0000 (0:00:02.753) 0:03:13.670 **********
2026-03-13 01:12:53.984269 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:12:53.984275 | orchestrator |
2026-03-13 01:12:53.984280 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2026-03-13 01:12:53.984285 | orchestrator | Friday 13 March 2026 01:11:51 +0000 (0:00:02.691) 0:03:16.362 **********
2026-03-13 01:12:53.984289 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:12:53.984295 | orchestrator |
2026-03-13 01:12:53.984299 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2026-03-13 01:12:53.984305 | orchestrator | Friday 13 March 2026 01:11:53 +0000 (0:00:02.169) 0:03:18.531 **********
2026-03-13 01:12:53.984309 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:12:53.984316 | orchestrator |
2026-03-13 01:12:53.984322 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2026-03-13 01:12:53.984329 | orchestrator | Friday 13 March 2026 01:11:55 +0000 (0:00:02.564) 0:03:21.095 **********
2026-03-13 01:12:53.984334 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:12:53.984339 | orchestrator |
2026-03-13 01:12:53.984344 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-03-13 01:12:53.984349 | orchestrator | Friday 13 March 2026 01:12:15 +0000 (0:00:19.944) 0:03:41.040 **********
2026-03-13 01:12:53.984353 | orchestrator |
2026-03-13 01:12:53.984358 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-03-13 01:12:53.984364 | orchestrator | Friday 13 March 2026 01:12:15 +0000 (0:00:00.067) 0:03:41.107 **********
2026-03-13 01:12:53.984369 | orchestrator |
2026-03-13 01:12:53.984374 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-03-13 01:12:53.984380 | orchestrator | Friday 13 March 2026 01:12:15 +0000 (0:00:00.077) 0:03:41.248 **********
2026-03-13 01:12:53.984383 | orchestrator |
2026-03-13 01:12:53.984386 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2026-03-13 01:12:53.984389 | orchestrator | Friday 13 March 2026 01:12:16 +0000 (0:00:00.077) 0:03:41.248 **********
2026-03-13 01:12:53.984392 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:12:53.984395 | orchestrator | changed: [testbed-node-2]
2026-03-13 01:12:53.984398 | orchestrator | changed: [testbed-node-1]
2026-03-13 01:12:53.984402 | orchestrator |
2026-03-13 01:12:53.984405 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2026-03-13 01:12:53.984408 | orchestrator | Friday 13 March 2026 01:12:29 +0000 (0:00:13.850) 0:03:55.098 **********
2026-03-13 01:12:53.984411 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:12:53.984414 | orchestrator | changed: [testbed-node-1]
2026-03-13 01:12:53.984417 | orchestrator | changed: [testbed-node-2]
2026-03-13 01:12:53.984420 | orchestrator |
2026-03-13 01:12:53.984423 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2026-03-13 01:12:53.984426 | orchestrator | Friday 13 March 2026 01:12:35 +0000 (0:00:06.062) 0:04:01.160 **********
2026-03-13 01:12:53.984429 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:12:53.984432 | orchestrator | changed: [testbed-node-1]
2026-03-13 01:12:53.984435 | orchestrator | changed: [testbed-node-2]
2026-03-13 01:12:53.984438 | orchestrator |
2026-03-13 01:12:53.984441 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2026-03-13 01:12:53.984444 | orchestrator | Friday 13 March 2026 01:12:41 +0000 (0:00:05.158) 0:04:06.318 **********
2026-03-13 01:12:53.984447 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:12:53.984450 | orchestrator | changed: [testbed-node-1]
2026-03-13 01:12:53.984453 | orchestrator | changed: [testbed-node-2]
2026-03-13 01:12:53.984456 | orchestrator |
2026-03-13 01:12:53.984459 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2026-03-13 01:12:53.984462 | orchestrator | Friday 13 March 2026 01:12:46 +0000 (0:00:04.891) 0:04:11.209 **********
2026-03-13 01:12:53.984465 | orchestrator | changed: [testbed-node-0]
2026-03-13 01:12:53.984468 | orchestrator | changed: [testbed-node-1]
2026-03-13 01:12:53.984471 | orchestrator | changed: [testbed-node-2]
2026-03-13 01:12:53.984480 | orchestrator |
2026-03-13 01:12:53.984487 | orchestrator | PLAY RECAP *********************************************************************
2026-03-13 01:12:53.984493 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-13 01:12:53.984498 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-13 01:12:53.984506 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-13 01:12:53.984511 | orchestrator |
2026-03-13 01:12:53.984516 | orchestrator |
2026-03-13 01:12:53.984521 | orchestrator | TASKS RECAP ********************************************************************
2026-03-13 01:12:53.984526 | orchestrator | Friday 13 March 2026 01:12:51 +0000 (0:00:05.617) 0:04:16.827 **********
2026-03-13 01:12:53.984535 | orchestrator | ===============================================================================
2026-03-13 01:12:53.984540 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 19.94s
2026-03-13 01:12:53.984545 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 17.25s
2026-03-13 01:12:53.984550 | orchestrator | octavia : Add rules for security groups -------------------------------- 14.80s
2026-03-13 01:12:53.984555 | orchestrator | octavia : Adding octavia related roles --------------------------------- 14.15s
2026-03-13 01:12:53.984558 | orchestrator | octavia : Restart octavia-api container -------------------------------- 13.85s
2026-03-13 01:12:53.984561 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.37s
2026-03-13 01:12:53.984564 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.24s
2026-03-13 01:12:53.984567 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 6.65s
2026-03-13 01:12:53.984570 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 6.52s
2026-03-13 01:12:53.984573 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.47s
2026-03-13 01:12:53.984576 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 6.06s
2026-03-13 01:12:53.984579 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 5.78s
2026-03-13 01:12:53.984584 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.62s
2026-03-13 01:12:53.984589 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.39s
2026-03-13 01:12:53.984595 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.27s
2026-03-13 01:12:53.984600 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.16s
2026-03-13 01:12:53.984605 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.03s
2026-03-13 01:12:53.984611 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.00s
2026-03-13 01:12:53.984616 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 4.89s
2026-03-13 01:12:53.984621 | orchestrator | octavia : Create loadbalancer management network ------------------------ 4.88s
2026-03-13 01:12:53.984626 | orchestrator | 2026-03-13 01:12:53 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-13 01:12:57.030217 | orchestrator | 2026-03-13 01:12:57 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-13 01:13:00.068798 | orchestrator | 2026-03-13 01:13:00 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-13 01:13:03.096351 | orchestrator | 2026-03-13 01:13:03 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-13 01:13:06.138878 | orchestrator | 2026-03-13 01:13:06 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-13 01:13:09.191279 | orchestrator | 2026-03-13 01:13:09 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-13 01:13:12.235957 | orchestrator | 2026-03-13 01:13:12 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-13 01:13:15.281391 | orchestrator | 2026-03-13 01:13:15 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-13 01:13:18.323482 | orchestrator | 2026-03-13 01:13:18 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-13 01:13:21.362186 | orchestrator | 2026-03-13 01:13:21 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-13 01:13:24.410770 | orchestrator | 2026-03-13 01:13:24 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-13 01:13:27.454225 | orchestrator | 2026-03-13 01:13:27 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-13 01:13:30.497091 | orchestrator | 2026-03-13 01:13:30 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-13 01:13:33.537334 | orchestrator | 2026-03-13 01:13:33 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-13 01:13:36.582912 | orchestrator | 2026-03-13 01:13:36 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-13 01:13:39.624515 | orchestrator | 2026-03-13 01:13:39 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-13 01:13:42.664586 | orchestrator | 2026-03-13 01:13:42 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-13 01:13:45.705682 | orchestrator | 2026-03-13 01:13:45 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-13 01:13:48.750183 | orchestrator | 2026-03-13 01:13:48 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-13 01:13:51.791886 | orchestrator | 2026-03-13 01:13:51 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-13 01:13:54.832063 | orchestrator |
2026-03-13 01:15:55.294343 | orchestrator |
2026-03-13 01:15:55.298229 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Fri Mar 13 01:15:55 UTC 2026
2026-03-13 01:15:55.298302 | orchestrator |
2026-03-13 01:15:55.709445 | orchestrator | ok: Runtime: 0:34:59.699693
2026-03-13 01:15:56.012640 |
2026-03-13 01:15:56.012828 | TASK [Bootstrap services] 2026-03-13 01:15:56.965076 | orchestrator | 2026-03-13 01:15:56.965246 | orchestrator | # BOOTSTRAP 2026-03-13 01:15:56.965264 | orchestrator | 2026-03-13 01:15:56.965273 | orchestrator | + set -e 2026-03-13 01:15:56.965279 | orchestrator | + echo 2026-03-13 01:15:56.965285 | orchestrator | + echo '# BOOTSTRAP' 2026-03-13 01:15:56.965293 | orchestrator | + echo 2026-03-13 01:15:56.965337 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-03-13 01:15:56.974613 | orchestrator | + set -e 2026-03-13 01:15:56.974711 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-03-13 01:16:01.377173 | orchestrator | 2026-03-13 01:16:01 | INFO  | It takes a moment until task 152dc0ab-f726-45bb-aeda-ac9903518cb8 (flavor-manager) has been started and output is visible here. 2026-03-13 01:16:08.712094 | orchestrator | 2026-03-13 01:16:04 | INFO  | Flavor SCS-1L-1 created 2026-03-13 01:16:08.712231 | orchestrator | 2026-03-13 01:16:04 | INFO  | Flavor SCS-1L-1-5 created 2026-03-13 01:16:08.712245 | orchestrator | 2026-03-13 01:16:04 | INFO  | Flavor SCS-1V-2 created 2026-03-13 01:16:08.712251 | orchestrator | 2026-03-13 01:16:04 | INFO  | Flavor SCS-1V-2-5 created 2026-03-13 01:16:08.712257 | orchestrator | 2026-03-13 01:16:04 | INFO  | Flavor SCS-1V-4 created 2026-03-13 01:16:08.712264 | orchestrator | 2026-03-13 01:16:05 | INFO  | Flavor SCS-1V-4-10 created 2026-03-13 01:16:08.712270 | orchestrator | 2026-03-13 01:16:05 | INFO  | Flavor SCS-1V-8 created 2026-03-13 01:16:08.712277 | orchestrator | 2026-03-13 01:16:05 | INFO  | Flavor SCS-1V-8-20 created 2026-03-13 01:16:08.712298 | orchestrator | 2026-03-13 01:16:05 | INFO  | Flavor SCS-2V-4 created 2026-03-13 01:16:08.712305 | orchestrator | 2026-03-13 01:16:05 | INFO  | Flavor SCS-2V-4-10 created 2026-03-13 01:16:08.712311 | orchestrator | 2026-03-13 01:16:05 | INFO  | Flavor SCS-2V-8 created 2026-03-13 01:16:08.712317 
| orchestrator | 2026-03-13 01:16:06 | INFO  | Flavor SCS-2V-8-20 created 2026-03-13 01:16:08.712323 | orchestrator | 2026-03-13 01:16:06 | INFO  | Flavor SCS-2V-16 created 2026-03-13 01:16:08.712329 | orchestrator | 2026-03-13 01:16:06 | INFO  | Flavor SCS-2V-16-50 created 2026-03-13 01:16:08.712334 | orchestrator | 2026-03-13 01:16:06 | INFO  | Flavor SCS-4V-8 created 2026-03-13 01:16:08.712340 | orchestrator | 2026-03-13 01:16:06 | INFO  | Flavor SCS-4V-8-20 created 2026-03-13 01:16:08.712345 | orchestrator | 2026-03-13 01:16:06 | INFO  | Flavor SCS-4V-16 created 2026-03-13 01:16:08.712351 | orchestrator | 2026-03-13 01:16:06 | INFO  | Flavor SCS-4V-16-50 created 2026-03-13 01:16:08.712357 | orchestrator | 2026-03-13 01:16:07 | INFO  | Flavor SCS-4V-32 created 2026-03-13 01:16:08.712363 | orchestrator | 2026-03-13 01:16:07 | INFO  | Flavor SCS-4V-32-100 created 2026-03-13 01:16:08.712369 | orchestrator | 2026-03-13 01:16:07 | INFO  | Flavor SCS-8V-16 created 2026-03-13 01:16:08.712375 | orchestrator | 2026-03-13 01:16:07 | INFO  | Flavor SCS-8V-16-50 created 2026-03-13 01:16:08.712381 | orchestrator | 2026-03-13 01:16:07 | INFO  | Flavor SCS-8V-32 created 2026-03-13 01:16:08.712388 | orchestrator | 2026-03-13 01:16:07 | INFO  | Flavor SCS-8V-32-100 created 2026-03-13 01:16:08.712444 | orchestrator | 2026-03-13 01:16:07 | INFO  | Flavor SCS-16V-32 created 2026-03-13 01:16:08.712451 | orchestrator | 2026-03-13 01:16:07 | INFO  | Flavor SCS-16V-32-100 created 2026-03-13 01:16:08.712459 | orchestrator | 2026-03-13 01:16:08 | INFO  | Flavor SCS-2V-4-20s created 2026-03-13 01:16:08.712465 | orchestrator | 2026-03-13 01:16:08 | INFO  | Flavor SCS-4V-8-50s created 2026-03-13 01:16:08.712471 | orchestrator | 2026-03-13 01:16:08 | INFO  | Flavor SCS-4V-16-100s created 2026-03-13 01:16:08.712478 | orchestrator | 2026-03-13 01:16:08 | INFO  | Flavor SCS-8V-32-100s created 2026-03-13 01:16:11.050823 | orchestrator | 2026-03-13 01:16:11 | INFO  | Trying to run play 
bootstrap-basic in environment openstack 2026-03-13 01:16:21.061753 | orchestrator | 2026-03-13 01:16:21 | INFO  | Prepare task for execution of bootstrap-basic. 2026-03-13 01:16:21.129522 | orchestrator | 2026-03-13 01:16:21 | INFO  | Task b51d3a5f-1301-4c7d-b38b-f420f2f3dc89 (bootstrap-basic) was prepared for execution. 2026-03-13 01:16:21.129604 | orchestrator | 2026-03-13 01:16:21 | INFO  | It takes a moment until task b51d3a5f-1301-4c7d-b38b-f420f2f3dc89 (bootstrap-basic) has been started and output is visible here. 2026-03-13 01:17:06.667245 | orchestrator | 2026-03-13 01:17:06.668878 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-03-13 01:17:06.668896 | orchestrator | 2026-03-13 01:17:06.668904 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-13 01:17:06.668914 | orchestrator | Friday 13 March 2026 01:16:25 +0000 (0:00:00.075) 0:00:00.075 ********** 2026-03-13 01:17:06.668921 | orchestrator | ok: [localhost] 2026-03-13 01:17:06.668929 | orchestrator | 2026-03-13 01:17:06.668935 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-03-13 01:17:06.668942 | orchestrator | Friday 13 March 2026 01:16:27 +0000 (0:00:01.946) 0:00:02.021 ********** 2026-03-13 01:17:06.668951 | orchestrator | ok: [localhost] 2026-03-13 01:17:06.668958 | orchestrator | 2026-03-13 01:17:06.668964 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-03-13 01:17:06.668971 | orchestrator | Friday 13 March 2026 01:16:36 +0000 (0:00:08.639) 0:00:10.661 ********** 2026-03-13 01:17:06.668978 | orchestrator | changed: [localhost] 2026-03-13 01:17:06.668986 | orchestrator | 2026-03-13 01:17:06.668993 | orchestrator | TASK [Create public network] *************************************************** 2026-03-13 01:17:06.669001 | orchestrator | Friday 13 March 2026 01:16:43 +0000 
(0:00:07.822) 0:00:18.483 ********** 2026-03-13 01:17:06.669008 | orchestrator | changed: [localhost] 2026-03-13 01:17:06.669016 | orchestrator | 2026-03-13 01:17:06.669027 | orchestrator | TASK [Set public network to default] ******************************************* 2026-03-13 01:17:06.669036 | orchestrator | Friday 13 March 2026 01:16:49 +0000 (0:00:05.792) 0:00:24.275 ********** 2026-03-13 01:17:06.669043 | orchestrator | changed: [localhost] 2026-03-13 01:17:06.669050 | orchestrator | 2026-03-13 01:17:06.669057 | orchestrator | TASK [Create public subnet] **************************************************** 2026-03-13 01:17:06.669065 | orchestrator | Friday 13 March 2026 01:16:55 +0000 (0:00:05.893) 0:00:30.169 ********** 2026-03-13 01:17:06.669072 | orchestrator | changed: [localhost] 2026-03-13 01:17:06.669079 | orchestrator | 2026-03-13 01:17:06.669087 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-03-13 01:17:06.669095 | orchestrator | Friday 13 March 2026 01:16:59 +0000 (0:00:03.776) 0:00:33.945 ********** 2026-03-13 01:17:06.669103 | orchestrator | changed: [localhost] 2026-03-13 01:17:06.669110 | orchestrator | 2026-03-13 01:17:06.669117 | orchestrator | TASK [Create manager role] ***************************************************** 2026-03-13 01:17:06.669142 | orchestrator | Friday 13 March 2026 01:17:02 +0000 (0:00:03.658) 0:00:37.604 ********** 2026-03-13 01:17:06.669150 | orchestrator | ok: [localhost] 2026-03-13 01:17:06.669157 | orchestrator | 2026-03-13 01:17:06.669164 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 01:17:06.669173 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-13 01:17:06.669182 | orchestrator | 2026-03-13 01:17:06.669188 | orchestrator | 2026-03-13 01:17:06.669195 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-13 01:17:06.669203 | orchestrator | Friday 13 March 2026 01:17:06 +0000 (0:00:03.498) 0:00:41.102 ********** 2026-03-13 01:17:06.669211 | orchestrator | =============================================================================== 2026-03-13 01:17:06.669218 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.64s 2026-03-13 01:17:06.669246 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.82s 2026-03-13 01:17:06.669251 | orchestrator | Set public network to default ------------------------------------------- 5.89s 2026-03-13 01:17:06.669256 | orchestrator | Create public network --------------------------------------------------- 5.79s 2026-03-13 01:17:06.669261 | orchestrator | Create public subnet ---------------------------------------------------- 3.78s 2026-03-13 01:17:06.669266 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.66s 2026-03-13 01:17:06.669271 | orchestrator | Create manager role ----------------------------------------------------- 3.50s 2026-03-13 01:17:06.669275 | orchestrator | Gathering Facts --------------------------------------------------------- 1.95s 2026-03-13 01:17:08.952303 | orchestrator | 2026-03-13 01:17:08 | INFO  | It takes a moment until task 3128eca3-1953-447a-9b2a-8a16d398a592 (image-manager) has been started and output is visible here. 
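The flavors created at the start of this run follow the SCS naming scheme (`SCS-<vCPUs>V-<RAM GiB>[-<disk GB>][s]`, e.g. `SCS-2V-8-20` or `SCS-2V-4-20s`). A minimal sketch of decoding such a name, assuming the trailing `s` marks the SSD/local-disk variant (an assumption based on the names in the log, not on the SCS spec itself):

```python
import re

# Pattern for SCS flavor names as seen in the log above.
# Assumption: "<n>V" = vCPUs, next number = RAM in GiB,
# optional third number = disk in GB, trailing "s" = SSD/local-disk variant.
_SCS = re.compile(r"^SCS-(\d+)V-(\d+)(?:-(\d+))?(s?)$")

def parse_scs_flavor(name: str) -> dict:
    """Decode an SCS flavor name into its resource components."""
    m = _SCS.match(name)
    if m is None:
        raise ValueError(f"not an SCS flavor name: {name}")
    vcpus, ram_gib, disk_gb, ssd = m.groups()
    return {
        "vcpus": int(vcpus),
        "ram_gib": int(ram_gib),
        "disk_gb": int(disk_gb) if disk_gb else 0,  # 0: no root disk in the name
        "ssd": ssd == "s",
    }
```

For example, `parse_scs_flavor("SCS-4V-16-50")` yields 4 vCPUs, 16 GiB RAM and a 50 GB disk.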
2026-03-13 01:17:50.129472 | orchestrator | 2026-03-13 01:17:11 | INFO  | Processing image 'Cirros 0.6.2' 2026-03-13 01:17:50.129599 | orchestrator | 2026-03-13 01:17:11 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-03-13 01:17:50.129609 | orchestrator | 2026-03-13 01:17:11 | INFO  | Importing image Cirros 0.6.2 2026-03-13 01:17:50.129614 | orchestrator | 2026-03-13 01:17:11 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-13 01:17:50.129620 | orchestrator | 2026-03-13 01:17:14 | INFO  | Waiting for image to leave queued state... 2026-03-13 01:17:50.129625 | orchestrator | 2026-03-13 01:17:16 | INFO  | Waiting for import to complete... 2026-03-13 01:17:50.129630 | orchestrator | 2026-03-13 01:17:26 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-03-13 01:17:50.129634 | orchestrator | 2026-03-13 01:17:26 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-03-13 01:17:50.129639 | orchestrator | 2026-03-13 01:17:26 | INFO  | Setting internal_version = 0.6.2 2026-03-13 01:17:50.129643 | orchestrator | 2026-03-13 01:17:26 | INFO  | Setting image_original_user = cirros 2026-03-13 01:17:50.129648 | orchestrator | 2026-03-13 01:17:26 | INFO  | Adding tag os:cirros 2026-03-13 01:17:50.129652 | orchestrator | 2026-03-13 01:17:26 | INFO  | Setting property architecture: x86_64 2026-03-13 01:17:50.129656 | orchestrator | 2026-03-13 01:17:27 | INFO  | Setting property hw_disk_bus: scsi 2026-03-13 01:17:50.129660 | orchestrator | 2026-03-13 01:17:27 | INFO  | Setting property hw_rng_model: virtio 2026-03-13 01:17:50.129693 | orchestrator | 2026-03-13 01:17:27 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-13 01:17:50.129703 | orchestrator | 2026-03-13 01:17:27 | INFO  | Setting property hw_watchdog_action: reset 2026-03-13 01:17:50.129710 | orchestrator | 2026-03-13 01:17:27 | 
INFO  | Setting property hypervisor_type: qemu 2026-03-13 01:17:50.129732 | orchestrator | 2026-03-13 01:17:28 | INFO  | Setting property os_distro: cirros 2026-03-13 01:17:50.129746 | orchestrator | 2026-03-13 01:17:28 | INFO  | Setting property os_purpose: minimal 2026-03-13 01:17:50.129752 | orchestrator | 2026-03-13 01:17:28 | INFO  | Setting property replace_frequency: never 2026-03-13 01:17:50.129758 | orchestrator | 2026-03-13 01:17:28 | INFO  | Setting property uuid_validity: none 2026-03-13 01:17:50.129763 | orchestrator | 2026-03-13 01:17:28 | INFO  | Setting property provided_until: none 2026-03-13 01:17:50.129769 | orchestrator | 2026-03-13 01:17:29 | INFO  | Setting property image_description: Cirros 2026-03-13 01:17:50.129774 | orchestrator | 2026-03-13 01:17:29 | INFO  | Setting property image_name: Cirros 2026-03-13 01:17:50.129806 | orchestrator | 2026-03-13 01:17:29 | INFO  | Setting property internal_version: 0.6.2 2026-03-13 01:17:50.129813 | orchestrator | 2026-03-13 01:17:29 | INFO  | Setting property image_original_user: cirros 2026-03-13 01:17:50.129820 | orchestrator | 2026-03-13 01:17:30 | INFO  | Setting property os_version: 0.6.2 2026-03-13 01:17:50.129826 | orchestrator | 2026-03-13 01:17:30 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-13 01:17:50.129834 | orchestrator | 2026-03-13 01:17:30 | INFO  | Setting property image_build_date: 2023-05-30 2026-03-13 01:17:50.129841 | orchestrator | 2026-03-13 01:17:30 | INFO  | Checking status of 'Cirros 0.6.2' 2026-03-13 01:17:50.129847 | orchestrator | 2026-03-13 01:17:30 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-03-13 01:17:50.129859 | orchestrator | 2026-03-13 01:17:30 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-03-13 01:17:50.129864 | orchestrator | 2026-03-13 01:17:31 | INFO  | Processing image 'Cirros 0.6.3' 2026-03-13 01:17:50.129868 | orchestrator | 2026-03-13 
01:17:31 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-03-13 01:17:50.129873 | orchestrator | 2026-03-13 01:17:31 | INFO  | Importing image Cirros 0.6.3 2026-03-13 01:17:50.129877 | orchestrator | 2026-03-13 01:17:31 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-13 01:17:50.129880 | orchestrator | 2026-03-13 01:17:31 | INFO  | Waiting for image to leave queued state... 2026-03-13 01:17:50.129885 | orchestrator | 2026-03-13 01:17:33 | INFO  | Waiting for import to complete... 2026-03-13 01:17:50.129905 | orchestrator | 2026-03-13 01:17:44 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-03-13 01:17:50.129911 | orchestrator | 2026-03-13 01:17:44 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-03-13 01:17:50.129917 | orchestrator | 2026-03-13 01:17:44 | INFO  | Setting internal_version = 0.6.3 2026-03-13 01:17:50.129923 | orchestrator | 2026-03-13 01:17:44 | INFO  | Setting image_original_user = cirros 2026-03-13 01:17:50.129929 | orchestrator | 2026-03-13 01:17:44 | INFO  | Adding tag os:cirros 2026-03-13 01:17:50.129935 | orchestrator | 2026-03-13 01:17:44 | INFO  | Setting property architecture: x86_64 2026-03-13 01:17:50.129942 | orchestrator | 2026-03-13 01:17:44 | INFO  | Setting property hw_disk_bus: scsi 2026-03-13 01:17:50.129948 | orchestrator | 2026-03-13 01:17:44 | INFO  | Setting property hw_rng_model: virtio 2026-03-13 01:17:50.129955 | orchestrator | 2026-03-13 01:17:45 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-13 01:17:50.129962 | orchestrator | 2026-03-13 01:17:45 | INFO  | Setting property hw_watchdog_action: reset 2026-03-13 01:17:50.129966 | orchestrator | 2026-03-13 01:17:45 | INFO  | Setting property hypervisor_type: qemu 2026-03-13 01:17:50.129970 | orchestrator | 2026-03-13 01:17:45 | INFO  | Setting property os_distro: cirros 
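The image-manager output above sets each property (`architecture`, `hw_disk_bus`, `hw_rng_model`, …) one by one after checking the image's current parameters; the reconciliation amounts to diffing desired against current metadata. A minimal sketch, assuming plain dicts stand in for the Glance image properties (the real tool works against the Glance API):

```python
def properties_to_set(current: dict, desired: dict) -> dict:
    """Return only the desired properties that are missing or differ on the image."""
    return {k: v for k, v in desired.items() if current.get(k) != v}

# A subset of the values applied to 'Cirros 0.6.3' in the log above.
desired = {
    "architecture": "x86_64",
    "hw_disk_bus": "scsi",
    "hw_rng_model": "virtio",
    "os_distro": "cirros",
}
```

Calling `properties_to_set({"architecture": "x86_64"}, desired)` would return only the three properties still to be set, which matches the "Checking parameters" / "Setting property" pattern in the log.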
2026-03-13 01:17:50.129974 | orchestrator | 2026-03-13 01:17:45 | INFO  | Setting property os_purpose: minimal 2026-03-13 01:17:50.129978 | orchestrator | 2026-03-13 01:17:45 | INFO  | Setting property replace_frequency: never 2026-03-13 01:17:50.129982 | orchestrator | 2026-03-13 01:17:46 | INFO  | Setting property uuid_validity: none 2026-03-13 01:17:50.129986 | orchestrator | 2026-03-13 01:17:46 | INFO  | Setting property provided_until: none 2026-03-13 01:17:50.129990 | orchestrator | 2026-03-13 01:17:46 | INFO  | Setting property image_description: Cirros 2026-03-13 01:17:50.130000 | orchestrator | 2026-03-13 01:17:46 | INFO  | Setting property image_name: Cirros 2026-03-13 01:17:50.130005 | orchestrator | 2026-03-13 01:17:47 | INFO  | Setting property internal_version: 0.6.3 2026-03-13 01:17:50.130009 | orchestrator | 2026-03-13 01:17:47 | INFO  | Setting property image_original_user: cirros 2026-03-13 01:17:50.130051 | orchestrator | 2026-03-13 01:17:47 | INFO  | Setting property os_version: 0.6.3 2026-03-13 01:17:50.130056 | orchestrator | 2026-03-13 01:17:48 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-13 01:17:50.130060 | orchestrator | 2026-03-13 01:17:48 | INFO  | Setting property image_build_date: 2024-09-26 2026-03-13 01:17:50.130065 | orchestrator | 2026-03-13 01:17:48 | INFO  | Checking status of 'Cirros 0.6.3' 2026-03-13 01:17:50.130069 | orchestrator | 2026-03-13 01:17:48 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-03-13 01:17:50.130073 | orchestrator | 2026-03-13 01:17:48 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-03-13 01:17:50.466863 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-03-13 01:17:52.756171 | orchestrator | 2026-03-13 01:17:52 | INFO  | date: 2026-03-12 2026-03-13 01:17:52.756452 | orchestrator | 2026-03-13 01:17:52 | INFO  | image: 
octavia-amphora-haproxy-2024.2.20260312.qcow2 2026-03-13 01:17:52.756487 | orchestrator | 2026-03-13 01:17:52 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260312.qcow2 2026-03-13 01:17:52.756497 | orchestrator | 2026-03-13 01:17:52 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260312.qcow2.CHECKSUM 2026-03-13 01:17:53.230803 | orchestrator | 2026-03-13 01:17:53 | INFO  | checksum: 3eb27dd0f3c95d1fcf722d2a6431a3bf6401473af803643587ad7e5b597d5eb8 2026-03-13 01:17:53.309988 | orchestrator | 2026-03-13 01:17:53 | INFO  | It takes a moment until task 2f0a8f20-12e1-468e-a3cc-ced13beefa1c (image-manager) has been started and output is visible here. 2026-03-13 01:18:51.783151 | orchestrator | 2026-03-13 01:17:55 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-03-12' 2026-03-13 01:18:51.783209 | orchestrator | 2026-03-13 01:17:55 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260312.qcow2: 200 2026-03-13 01:18:51.783216 | orchestrator | 2026-03-13 01:17:55 | INFO  | Importing image OpenStack Octavia Amphora 2026-03-12 2026-03-13 01:18:51.783221 | orchestrator | 2026-03-13 01:17:55 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260312.qcow2 2026-03-13 01:18:51.783226 | orchestrator | 2026-03-13 01:17:56 | INFO  | Waiting for import to complete... 2026-03-13 01:18:51.783231 | orchestrator | 2026-03-13 01:18:06 | INFO  | Waiting for import to complete... 2026-03-13 01:18:51.783235 | orchestrator | 2026-03-13 01:18:17 | INFO  | Waiting for import to complete... 2026-03-13 01:18:51.783239 | orchestrator | 2026-03-13 01:18:27 | INFO  | Waiting for import to complete... 
2026-03-13 01:18:51.783243 | orchestrator | 2026-03-13 01:18:37 | INFO  | Waiting for import to complete... 2026-03-13 01:18:51.783249 | orchestrator | 2026-03-13 01:18:47 | INFO  | Import of 'OpenStack Octavia Amphora 2026-03-12' successfully completed, reloading images 2026-03-13 01:18:51.783253 | orchestrator | 2026-03-13 01:18:47 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-03-12' 2026-03-13 01:18:51.783270 | orchestrator | 2026-03-13 01:18:47 | INFO  | Setting internal_version = 2026-03-12 2026-03-13 01:18:51.783274 | orchestrator | 2026-03-13 01:18:47 | INFO  | Setting image_original_user = ubuntu 2026-03-13 01:18:51.783278 | orchestrator | 2026-03-13 01:18:47 | INFO  | Adding tag amphora 2026-03-13 01:18:51.783283 | orchestrator | 2026-03-13 01:18:48 | INFO  | Adding tag os:ubuntu 2026-03-13 01:18:51.783287 | orchestrator | 2026-03-13 01:18:48 | INFO  | Setting property architecture: x86_64 2026-03-13 01:18:51.783291 | orchestrator | 2026-03-13 01:18:48 | INFO  | Setting property hw_disk_bus: scsi 2026-03-13 01:18:51.783295 | orchestrator | 2026-03-13 01:18:48 | INFO  | Setting property hw_rng_model: virtio 2026-03-13 01:18:51.783299 | orchestrator | 2026-03-13 01:18:48 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-13 01:18:51.783303 | orchestrator | 2026-03-13 01:18:49 | INFO  | Setting property hw_watchdog_action: reset 2026-03-13 01:18:51.783307 | orchestrator | 2026-03-13 01:18:49 | INFO  | Setting property hypervisor_type: qemu 2026-03-13 01:18:51.783311 | orchestrator | 2026-03-13 01:18:49 | INFO  | Setting property os_distro: ubuntu 2026-03-13 01:18:51.783315 | orchestrator | 2026-03-13 01:18:49 | INFO  | Setting property replace_frequency: quarterly 2026-03-13 01:18:51.783319 | orchestrator | 2026-03-13 01:18:49 | INFO  | Setting property uuid_validity: last-1 2026-03-13 01:18:51.783323 | orchestrator | 2026-03-13 01:18:49 | INFO  | Setting property provided_until: none 2026-03-13 01:18:51.783327 | orchestrator | 
2026-03-13 01:18:50 | INFO  | Setting property os_purpose: network 2026-03-13 01:18:51.783332 | orchestrator | 2026-03-13 01:18:50 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2026-03-13 01:18:51.783336 | orchestrator | 2026-03-13 01:18:50 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2026-03-13 01:18:51.783340 | orchestrator | 2026-03-13 01:18:50 | INFO  | Setting property internal_version: 2026-03-12 2026-03-13 01:18:51.783351 | orchestrator | 2026-03-13 01:18:50 | INFO  | Setting property image_original_user: ubuntu 2026-03-13 01:18:51.783356 | orchestrator | 2026-03-13 01:18:50 | INFO  | Setting property os_version: 2026-03-12 2026-03-13 01:18:51.783363 | orchestrator | 2026-03-13 01:18:51 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260312.qcow2 2026-03-13 01:18:51.783369 | orchestrator | 2026-03-13 01:18:51 | INFO  | Setting property image_build_date: 2026-03-12 2026-03-13 01:18:51.783377 | orchestrator | 2026-03-13 01:18:51 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-03-12' 2026-03-13 01:18:51.783382 | orchestrator | 2026-03-13 01:18:51 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-03-12' 2026-03-13 01:18:51.783388 | orchestrator | 2026-03-13 01:18:51 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2026-03-13 01:18:51.783397 | orchestrator | 2026-03-13 01:18:51 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2026-03-13 01:18:51.783415 | orchestrator | 2026-03-13 01:18:51 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2026-03-13 01:18:51.783422 | orchestrator | 2026-03-13 01:18:51 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2026-03-13 01:18:52.222571 | orchestrator | ok: Runtime: 0:02:55.523135 2026-03-13 01:18:52.254397 | 2026-03-13 01:18:52.254527 | TASK [Run checks] 2026-03-13 
01:18:52.963475 | orchestrator | + set -e 2026-03-13 01:18:52.964343 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-13 01:18:52.964374 | orchestrator | ++ export INTERACTIVE=false 2026-03-13 01:18:52.964389 | orchestrator | ++ INTERACTIVE=false 2026-03-13 01:18:52.964399 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-13 01:18:52.964409 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-13 01:18:52.964419 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-13 01:18:52.964794 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-13 01:18:52.970843 | orchestrator | 2026-03-13 01:18:52.970905 | orchestrator | # CHECK 2026-03-13 01:18:52.970915 | orchestrator | 2026-03-13 01:18:52.970928 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-13 01:18:52.970941 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-13 01:18:52.970949 | orchestrator | + echo 2026-03-13 01:18:52.970957 | orchestrator | + echo '# CHECK' 2026-03-13 01:18:52.970964 | orchestrator | + echo 2026-03-13 01:18:52.970975 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-13 01:18:52.974006 | orchestrator | ++ semver latest 5.0.0 2026-03-13 01:18:53.031104 | orchestrator | 2026-03-13 01:18:53.031161 | orchestrator | ## Containers @ testbed-manager 2026-03-13 01:18:53.031173 | orchestrator | 2026-03-13 01:18:53.031187 | orchestrator | + [[ -1 -eq -1 ]] 2026-03-13 01:18:53.031194 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-13 01:18:53.031201 | orchestrator | + echo 2026-03-13 01:18:53.031209 | orchestrator | + echo '## Containers @ testbed-manager' 2026-03-13 01:18:53.031216 | orchestrator | + echo 2026-03-13 01:18:53.031224 | orchestrator | + osism container testbed-manager ps 2026-03-13 01:18:55.056801 | orchestrator | 2026-03-13 01:18:55 | INFO  | Creating empty known_hosts file: /share/known_hosts 
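The amphora image script earlier in the run fetches a `.CHECKSUM` file and logs the expected sha256 digest before importing the qcow2 image; the verification step reduces to hashing the download and comparing. A minimal sketch, assuming the checksum file uses the coreutils `<hex>  <filename>` layout (an assumption; the actual file format is not shown in the log):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex digest of the downloaded image bytes."""
    return hashlib.sha256(data).hexdigest()

def expected_from_checksum_file(text: str, filename: str) -> str:
    """Pick the digest for filename from '<hex>  <name>' lines (assumed coreutils format)."""
    for line in text.splitlines():
        parts = line.split()
        # A leading '*' on the name marks binary mode in coreutils output.
        if len(parts) == 2 and parts[1].lstrip("*") == filename:
            return parts[0]
    raise KeyError(filename)

def verify(data: bytes, checksum_text: str, filename: str) -> bool:
    """True if the downloaded bytes match the published digest."""
    return sha256_hex(data) == expected_from_checksum_file(checksum_text, filename)
```

The image names and URLs in the sketch's surroundings are taken from the log; only the checksum-file layout is assumed.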
2026-03-13 01:18:55.401337 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-13 01:18:55.401433 | orchestrator | b015255a05f1 registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_blackbox_exporter 2026-03-13 01:18:55.401448 | orchestrator | 0be861c91c8c registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_alertmanager 2026-03-13 01:18:55.401455 | orchestrator | 15cf7b1a91dd registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2026-03-13 01:18:55.401465 | orchestrator | 05aba34c52cd registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter 2026-03-13 01:18:55.401475 | orchestrator | 38eb100a0fae registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_server 2026-03-13 01:18:55.401483 | orchestrator | 9069bf7fda47 registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 19 minutes ago Up 18 minutes cephclient 2026-03-13 01:18:55.401490 | orchestrator | 088c22f4e441 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2026-03-13 01:18:55.401497 | orchestrator | f79692382e93 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2026-03-13 01:18:55.401524 | orchestrator | 5fe4dbb16cfe registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2026-03-13 01:18:55.401532 | orchestrator | ed0d5ed532d3 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 31 minutes ago Up 31 minutes (healthy) 80/tcp phpmyadmin 2026-03-13 01:18:55.401538 | orchestrator | 1817103ba48f registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 32 minutes ago Up 32 
minutes openstackclient 2026-03-13 01:18:55.401545 | orchestrator | a77c3db4d874 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 32 minutes ago Up 32 minutes (healthy) 8080/tcp homer 2026-03-13 01:18:55.401551 | orchestrator | 0fae0e4bef12 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 55 minutes ago Up 54 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2026-03-13 01:18:55.401568 | orchestrator | e6d45b64e721 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 59 minutes ago Up 38 minutes (healthy) manager-inventory_reconciler-1 2026-03-13 01:18:55.401574 | orchestrator | fbea49e66db6 registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 59 minutes ago Up 39 minutes (healthy) ceph-ansible 2026-03-13 01:18:55.401718 | orchestrator | 6798158ff743 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 59 minutes ago Up 39 minutes (healthy) osism-kubernetes 2026-03-13 01:18:55.401743 | orchestrator | 199c4b4da61a registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 59 minutes ago Up 39 minutes (healthy) osism-ansible 2026-03-13 01:18:55.401749 | orchestrator | 2fbc45e74fbb registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 59 minutes ago Up 39 minutes (healthy) kolla-ansible 2026-03-13 01:18:55.401755 | orchestrator | 07e7d5f972eb registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 59 minutes ago Up 39 minutes (healthy) 8000/tcp manager-ara-server-1 2026-03-13 01:18:55.401762 | orchestrator | 81231982e7ac registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 59 minutes ago Up 39 minutes (healthy) osismclient 2026-03-13 01:18:55.401769 | orchestrator | e49aa5a4888f registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 59 minutes ago Up 39 minutes (healthy) manager-beat-1 2026-03-13 01:18:55.401775 | orchestrator | 2042cdca0a0f registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" 
59 minutes ago Up 39 minutes 192.168.16.5:3000->3000/tcp osism-frontend 2026-03-13 01:18:55.401781 | orchestrator | 6f3f9078b251 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 59 minutes ago Up 39 minutes (healthy) manager-listener-1 2026-03-13 01:18:55.401796 | orchestrator | 727a1fa9fa17 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 59 minutes ago Up 39 minutes (healthy) manager-flower-1 2026-03-13 01:18:55.401802 | orchestrator | 033a42c6dede registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 59 minutes ago Up 39 minutes (healthy) 3306/tcp manager-mariadb-1 2026-03-13 01:18:55.401808 | orchestrator | c55bf2791b9a registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 59 minutes ago Up 39 minutes (healthy) 6379/tcp manager-redis-1 2026-03-13 01:18:55.401814 | orchestrator | 02c577d48b67 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 59 minutes ago Up 39 minutes (healthy) manager-openstack-1 2026-03-13 01:18:55.401822 | orchestrator | 05d0fa256b60 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 59 minutes ago Up 39 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-03-13 01:18:55.401828 | orchestrator | 69b0919a7beb registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-03-13 01:18:55.708165 | orchestrator | 2026-03-13 01:18:55.708262 | orchestrator | ## Images @ testbed-manager 2026-03-13 01:18:55.708273 | orchestrator | 2026-03-13 01:18:55.708281 | orchestrator | + echo 2026-03-13 01:18:55.708288 | orchestrator | + echo '## Images @ testbed-manager' 2026-03-13 01:18:55.708298 | orchestrator | + echo 2026-03-13 01:18:55.708308 | orchestrator | + osism container testbed-manager images 2026-03-13 01:18:58.110255 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-13 
01:18:58.110371 | orchestrator | registry.osism.tech/osism/osism-ansible latest 3c82227097a8 About an hour ago 613MB 2026-03-13 01:18:58.110396 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 3de77f3ec904 About an hour ago 610MB 2026-03-13 01:18:58.110410 | orchestrator | registry.osism.tech/osism/osism latest 3376395f1aa5 About an hour ago 406MB 2026-03-13 01:18:58.110417 | orchestrator | registry.osism.tech/osism/ceph-ansible reef c0b375cdf34b About an hour ago 560MB 2026-03-13 01:18:58.110423 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest bb68cd12c222 About an hour ago 1.22GB 2026-03-13 01:18:58.110429 | orchestrator | registry.osism.tech/osism/osism-frontend latest 2a698a7d7ce5 About an hour ago 232MB 2026-03-13 01:18:58.110436 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 0dc880827435 About an hour ago 335MB 2026-03-13 01:18:58.110443 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 a4bd7552232f 22 hours ago 239MB 2026-03-13 01:18:58.110450 | orchestrator | registry.osism.tech/osism/cephclient reef 19951ee12217 22 hours ago 453MB 2026-03-13 01:18:58.110457 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 2bb4ef6fcc06 23 hours ago 673MB 2026-03-13 01:18:58.110464 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 f30cbade480a 23 hours ago 584MB 2026-03-13 01:18:58.110470 | orchestrator | registry.osism.tech/kolla/cron 2024.2 41fd7166b66f 23 hours ago 271MB 2026-03-13 01:18:58.110477 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 f5bb324d68f1 23 hours ago 313MB 2026-03-13 01:18:58.110504 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 699c1870a391 23 hours ago 844MB 2026-03-13 01:18:58.110510 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 89aabd045b66 23 hours ago 409MB 2026-03-13 01:18:58.110528 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 eea1c336348d 23 
hours ago 363MB 2026-03-13 01:18:58.110535 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 09085dea3626 23 hours ago 311MB 2026-03-13 01:18:58.110541 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 6 weeks ago 41.4MB 2026-03-13 01:18:58.110555 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 3 months ago 11.5MB 2026-03-13 01:18:58.110562 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 3 months ago 334MB 2026-03-13 01:18:58.110567 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 5 months ago 742MB 2026-03-13 01:18:58.110571 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 6 months ago 275MB 2026-03-13 01:18:58.110574 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 7 months ago 226MB 2026-03-13 01:18:58.110579 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 21 months ago 146MB 2026-03-13 01:18:58.410265 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-13 01:18:58.410828 | orchestrator | ++ semver latest 5.0.0 2026-03-13 01:18:58.458142 | orchestrator | 2026-03-13 01:18:58.458223 | orchestrator | ## Containers @ testbed-node-0 2026-03-13 01:18:58.458231 | orchestrator | 2026-03-13 01:18:58.458235 | orchestrator | + [[ -1 -eq -1 ]] 2026-03-13 01:18:58.458240 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-13 01:18:58.458244 | orchestrator | + echo 2026-03-13 01:18:58.458249 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-03-13 01:18:58.458254 | orchestrator | + echo 2026-03-13 01:18:58.458257 | orchestrator | + osism container testbed-node-0 ps 2026-03-13 01:19:00.954392 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-13 01:19:00.954444 | orchestrator | be82889df11f registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 6 
minutes ago Up 6 minutes (healthy) octavia_worker 2026-03-13 01:19:00.954450 | orchestrator | 0b53e3f50b94 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) octavia_housekeeping 2026-03-13 01:19:00.954455 | orchestrator | 7d4d7f36cabf registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) octavia_health_manager 2026-03-13 01:19:00.954468 | orchestrator | 28cec5aa3695 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 6 minutes ago Up 6 minutes octavia_driver_agent 2026-03-13 01:19:00.954523 | orchestrator | eece381ebad8 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) octavia_api 2026-03-13 01:19:00.954530 | orchestrator | 2e3f3a016b6e registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_novncproxy 2026-03-13 01:19:00.954534 | orchestrator | f347d2b17bd4 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2026-03-13 01:19:00.954538 | orchestrator | d0395b5485c2 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_api 2026-03-13 01:19:00.954564 | orchestrator | d2cf325b109f registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_scheduler 2026-03-13 01:19:00.954568 | orchestrator | 5db306024334 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes grafana 2026-03-13 01:19:00.954572 | orchestrator | ff65981e2753 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_backup 2026-03-13 01:19:00.954576 | orchestrator | 8e7fdeb64fc0 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes 
(healthy) glance_api 2026-03-13 01:19:00.954580 | orchestrator | 5dfcd424b33d registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_volume 2026-03-13 01:19:00.954584 | orchestrator | 21a2d09438e8 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_scheduler 2026-03-13 01:19:00.954588 | orchestrator | 8d126f70365e registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2026-03-13 01:19:00.954592 | orchestrator | 10a2f5b75625 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api 2026-03-13 01:19:00.954596 | orchestrator | 646cdb13d188 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2026-03-13 01:19:00.954600 | orchestrator | 72bf605f7055 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2026-03-13 01:19:00.954632 | orchestrator | ce7146231938 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2026-03-13 01:19:00.954636 | orchestrator | 3215220c1456 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter 2026-03-13 01:19:00.954640 | orchestrator | 3d5c8c6c1d48 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor 2026-03-13 01:19:00.954644 | orchestrator | 19a644ac0975 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api 2026-03-13 01:19:00.954651 | orchestrator | a37d834b5eb6 
registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) placement_api 2026-03-13 01:19:00.954655 | orchestrator | 0757924354e8 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) neutron_server 2026-03-13 01:19:00.954658 | orchestrator | efb670593350 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_worker 2026-03-13 01:19:00.954665 | orchestrator | 31b7b80ff2bc registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_mdns 2026-03-13 01:19:00.954669 | orchestrator | d05073708433 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_producer 2026-03-13 01:19:00.954676 | orchestrator | 294f53b9381a registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_central 2026-03-13 01:19:00.954684 | orchestrator | 8e69250a7a74 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_api 2026-03-13 01:19:00.954688 | orchestrator | 9841c9d0ceaa registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_worker 2026-03-13 01:19:00.954692 | orchestrator | 76d48deccbbc registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_backend_bind9 2026-03-13 01:19:00.954695 | orchestrator | ffe0dec65090 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_keystone_listener 2026-03-13 01:19:00.954701 | orchestrator | 1b52d0f7022c registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-0 
2026-03-13 01:19:00.954708 | orchestrator | 613a3b561a04 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_api 2026-03-13 01:19:00.954714 | orchestrator | 98f63cae4b84 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone 2026-03-13 01:19:00.954724 | orchestrator | 697e88e19bd6 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_fernet 2026-03-13 01:19:00.954731 | orchestrator | 05cb78cf0073 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_ssh 2026-03-13 01:19:00.954738 | orchestrator | a0a7bc9a6608 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) horizon 2026-03-13 01:19:00.954745 | orchestrator | 21177e2eab14 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 2026-03-13 01:19:00.954751 | orchestrator | 2d63c75eb673 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch_dashboards 2026-03-13 01:19:00.954758 | orchestrator | b574f0718486 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-0 2026-03-13 01:19:00.954764 | orchestrator | 3d601fcf3238 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) opensearch 2026-03-13 01:19:00.954770 | orchestrator | 8d578fa997a5 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2026-03-13 01:19:00.954777 | orchestrator | dfd64d88c318 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) proxysql 2026-03-13 01:19:00.954783 | orchestrator | 0454083a10b6 
registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) haproxy 2026-03-13 01:19:00.954789 | orchestrator | 73ad010a572c registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_northd 2026-03-13 01:19:00.954799 | orchestrator | 29db667d332d registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_sb_db 2026-03-13 01:19:00.954811 | orchestrator | ef3b49123e2b registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_nb_db 2026-03-13 01:19:00.954817 | orchestrator | 79dd92007179 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-0 2026-03-13 01:19:00.954824 | orchestrator | 78b56027e39f registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_controller 2026-03-13 01:19:00.954840 | orchestrator | 0dab8827fb90 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq 2026-03-13 01:19:00.954847 | orchestrator | a6fd18bbb4f3 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_vswitchd 2026-03-13 01:19:00.954854 | orchestrator | 7d5f02d7b369 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2026-03-13 01:19:00.954861 | orchestrator | 47dc223e484a registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2026-03-13 01:19:00.954868 | orchestrator | f2fa7d55a048 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2026-03-13 01:19:00.954875 | orchestrator | c740e5d4bdd8 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 30 minutes ago 
Up 30 minutes (healthy) memcached 2026-03-13 01:19:00.954883 | orchestrator | ce10b86d8eb2 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2026-03-13 01:19:00.954887 | orchestrator | 54e5b78167ff registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2026-03-13 01:19:00.954891 | orchestrator | e4c9a2b578c0 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2026-03-13 01:19:01.258600 | orchestrator | 2026-03-13 01:19:01.258662 | orchestrator | ## Images @ testbed-node-0 2026-03-13 01:19:01.258667 | orchestrator | 2026-03-13 01:19:01.258671 | orchestrator | + echo 2026-03-13 01:19:01.258674 | orchestrator | + echo '## Images @ testbed-node-0' 2026-03-13 01:19:01.258681 | orchestrator | + echo 2026-03-13 01:19:01.258685 | orchestrator | + osism container testbed-node-0 images 2026-03-13 01:19:03.635742 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-13 01:19:03.635797 | orchestrator | registry.osism.tech/osism/ceph-daemon reef c5b650628899 22 hours ago 1.27GB 2026-03-13 01:19:03.635811 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 2bb4ef6fcc06 23 hours ago 673MB 2026-03-13 01:19:03.635816 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 f826fd8c0fda 23 hours ago 328MB 2026-03-13 01:19:03.635820 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 f30cbade480a 23 hours ago 584MB 2026-03-13 01:19:03.635824 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 e90bd16cdce3 23 hours ago 1.04GB 2026-03-13 01:19:03.635828 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 80f500fd8171 23 hours ago 282MB 2026-03-13 01:19:03.635832 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 ab948246405c 23 hours ago 279MB 2026-03-13 01:19:03.635836 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 1e53dab52482 23 hours ago 272MB 2026-03-13 01:19:03.635840 
| orchestrator | registry.osism.tech/kolla/cron 2024.2 41fd7166b66f 23 hours ago 271MB 2026-03-13 01:19:03.635854 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 5f9575c7de3a 23 hours ago 1.54GB 2026-03-13 01:19:03.635858 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 39e4bf85573b 23 hours ago 1.56GB 2026-03-13 01:19:03.635862 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 786a616e9f86 23 hours ago 422MB 2026-03-13 01:19:03.635866 | orchestrator | registry.osism.tech/kolla/redis 2024.2 79cbbadff70c 23 hours ago 278MB 2026-03-13 01:19:03.635869 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 7f675fae4e1f 23 hours ago 278MB 2026-03-13 01:19:03.635873 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 b39733124f89 23 hours ago 457MB 2026-03-13 01:19:03.635877 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 44ae16a78943 23 hours ago 1.15GB 2026-03-13 01:19:03.635881 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 9c6308dc8e8a 23 hours ago 284MB 2026-03-13 01:19:03.635884 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 e99035040be1 23 hours ago 284MB 2026-03-13 01:19:03.635888 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 0811a6e4bdc8 23 hours ago 304MB 2026-03-13 01:19:03.635892 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 05abc98b5a12 23 hours ago 306MB 2026-03-13 01:19:03.635896 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 542ab6787a61 23 hours ago 297MB 2026-03-13 01:19:03.635899 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 eea1c336348d 23 hours ago 363MB 2026-03-13 01:19:03.635903 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 09085dea3626 23 hours ago 311MB 2026-03-13 01:19:03.635907 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 4fca3028b700 23 hours 
ago 989MB 2026-03-13 01:19:03.635910 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 534376e957d8 23 hours ago 994MB 2026-03-13 01:19:03.635914 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 108470ab5c4b 23 hours ago 990MB 2026-03-13 01:19:03.635918 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 facafd96d906 23 hours ago 990MB 2026-03-13 01:19:03.635921 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 bce30dc0da1c 23 hours ago 990MB 2026-03-13 01:19:03.635925 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 e37b3fbb8361 23 hours ago 994MB 2026-03-13 01:19:03.635929 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 b2764349685c 23 hours ago 1.13GB 2026-03-13 01:19:03.635933 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 79decdcdb4d2 23 hours ago 1.25GB 2026-03-13 01:19:03.635937 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 bdfe6c6d9073 23 hours ago 981MB 2026-03-13 01:19:03.635941 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 9d9d24b6b3f4 23 hours ago 982MB 2026-03-13 01:19:03.635944 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 1d7e6234cd5b 23 hours ago 1.72GB 2026-03-13 01:19:03.635948 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 9e5af04505af 23 hours ago 1.41GB 2026-03-13 01:19:03.635952 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 52f01491450e 23 hours ago 1.41GB 2026-03-13 01:19:03.635963 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 cc4f604334a9 23 hours ago 1.42GB 2026-03-13 01:19:03.635967 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 d9b4e7b97e15 23 hours ago 979MB 2026-03-13 01:19:03.635971 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 cca6a3b9063b 23 hours ago 979MB 2026-03-13 01:19:03.635977 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 93f4589e2111 23 
hours ago 979MB 2026-03-13 01:19:03.635981 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 ca2eec0ba782 23 hours ago 979MB 2026-03-13 01:19:03.635985 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 a5f5e8233564 23 hours ago 1.17GB 2026-03-13 01:19:03.635989 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 c378e867efd2 23 hours ago 996MB 2026-03-13 01:19:03.635992 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 5624aed8931e 23 hours ago 996MB 2026-03-13 01:19:03.635996 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 9db286885af5 23 hours ago 996MB 2026-03-13 01:19:03.636000 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 ebe1f56c178c 23 hours ago 1.05GB 2026-03-13 01:19:03.636003 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 e7d1493b165b 23 hours ago 995MB 2026-03-13 01:19:03.636007 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 17ff8652a341 23 hours ago 1.05GB 2026-03-13 01:19:03.636011 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 3de0708ce8c1 23 hours ago 1.07GB 2026-03-13 01:19:03.636015 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 f82f3bd15b51 23 hours ago 1.04GB 2026-03-13 01:19:03.636018 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 e07b0e418945 23 hours ago 1.06GB 2026-03-13 01:19:03.636022 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 1d669a84dd80 23 hours ago 1.03GB 2026-03-13 01:19:03.636026 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 d2b31fc919ac 23 hours ago 1.03GB 2026-03-13 01:19:03.636030 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 b9eb1b71fda7 23 hours ago 1.06GB 2026-03-13 01:19:03.636033 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 c013a6f84cbf 23 hours ago 1.03GB 2026-03-13 01:19:03.636037 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 
335605c25420 24 hours ago 1.22GB 2026-03-13 01:19:03.636041 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 1b5ddcaeca26 24 hours ago 1.22GB 2026-03-13 01:19:03.636045 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 4bff38baab68 24 hours ago 1.22GB 2026-03-13 01:19:03.636051 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 9385a76b5b24 24 hours ago 1.37GB 2026-03-13 01:19:03.636055 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 0a193069d16e 24 hours ago 981MB 2026-03-13 01:19:03.636059 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 351f367ed5b4 24 hours ago 1.1GB 2026-03-13 01:19:03.636063 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 7c8095c3db60 24 hours ago 846MB 2026-03-13 01:19:03.636075 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 f51841b0582e 24 hours ago 846MB 2026-03-13 01:19:03.636078 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 357485de29b2 24 hours ago 846MB 2026-03-13 01:19:03.636082 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 4007e5413f47 24 hours ago 846MB 2026-03-13 01:19:03.913135 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-13 01:19:03.913666 | orchestrator | ++ semver latest 5.0.0 2026-03-13 01:19:03.971548 | orchestrator | 2026-03-13 01:19:03.971635 | orchestrator | ## Containers @ testbed-node-1 2026-03-13 01:19:03.971642 | orchestrator | 2026-03-13 01:19:03.971646 | orchestrator | + [[ -1 -eq -1 ]] 2026-03-13 01:19:03.971650 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-13 01:19:03.971654 | orchestrator | + echo 2026-03-13 01:19:03.971673 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-03-13 01:19:03.971678 | orchestrator | + echo 2026-03-13 01:19:03.971682 | orchestrator | + osism container testbed-node-1 ps 2026-03-13 01:19:06.380489 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 
2026-03-13 01:19:06.380544 | orchestrator | f8b094951c71 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) octavia_worker 2026-03-13 01:19:06.380552 | orchestrator | 72674888ec1d registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) octavia_housekeeping 2026-03-13 01:19:06.380558 | orchestrator | 91ee1a2ccd2f registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) octavia_health_manager 2026-03-13 01:19:06.380563 | orchestrator | 660c6ab0b77d registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 6 minutes ago Up 6 minutes octavia_driver_agent 2026-03-13 01:19:06.380569 | orchestrator | 9a38efeee560 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) octavia_api 2026-03-13 01:19:06.380574 | orchestrator | 39c98d2b42a5 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_novncproxy 2026-03-13 01:19:06.380579 | orchestrator | 33e42f29bf8d registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2026-03-13 01:19:06.380585 | orchestrator | 0920853f08ed registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_api 2026-03-13 01:19:06.380592 | orchestrator | 2fee86db049c registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes grafana 2026-03-13 01:19:06.380601 | orchestrator | 0fd7ccf6491c registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_scheduler 2026-03-13 01:19:06.380636 | orchestrator | 1929bfea08fb registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_backup 2026-03-13 
01:19:06.380643 | orchestrator | 792704f734e7 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) glance_api 2026-03-13 01:19:06.380648 | orchestrator | d7d3f8d61444 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_volume 2026-03-13 01:19:06.380653 | orchestrator | 6e0bd2028670 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_scheduler 2026-03-13 01:19:06.380668 | orchestrator | fb572631359d registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api 2026-03-13 01:19:06.380673 | orchestrator | ed147e77ebd3 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2026-03-13 01:19:06.380679 | orchestrator | 78483f22acf5 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2026-03-13 01:19:06.380684 | orchestrator | b32b31197a6d registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2026-03-13 01:19:06.380701 | orchestrator | 53c7f4fbe835 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2026-03-13 01:19:06.380706 | orchestrator | e8673997702c registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2026-03-13 01:19:06.380711 | orchestrator | ececdbd76656 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor 2026-03-13 01:19:06.380726 | orchestrator | 35d70a744128 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 14 
minutes ago Up 14 minutes (healthy) magnum_api 2026-03-13 01:19:06.380731 | orchestrator | 4efa12def068 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) neutron_server 2026-03-13 01:19:06.380736 | orchestrator | 10d32d20ff62 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) placement_api 2026-03-13 01:19:06.380741 | orchestrator | 6b8d088590e0 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_worker 2026-03-13 01:19:06.380746 | orchestrator | cdc2787788a2 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_mdns 2026-03-13 01:19:06.380751 | orchestrator | c44afed49be5 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 17 minutes ago Up 16 minutes (healthy) designate_producer 2026-03-13 01:19:06.380756 | orchestrator | 168b20125ef6 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_central 2026-03-13 01:19:06.380761 | orchestrator | bb01f6000cc0 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_api 2026-03-13 01:19:06.380766 | orchestrator | 4459245f546b registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_worker 2026-03-13 01:19:06.380771 | orchestrator | a2506eb747f1 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-1 2026-03-13 01:19:06.380776 | orchestrator | 9fa727a3d296 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_backend_bind9 2026-03-13 01:19:06.380781 | orchestrator | 6e24db994d7d 
registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_keystone_listener 2026-03-13 01:19:06.380786 | orchestrator | 75d40df192a4 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_api 2026-03-13 01:19:06.380791 | orchestrator | 232d8a373ab1 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone 2026-03-13 01:19:06.381074 | orchestrator | c1120caab5bd registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_fernet 2026-03-13 01:19:06.381119 | orchestrator | 19c0f6773515 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) horizon 2026-03-13 01:19:06.381132 | orchestrator | 7d85e7ea2320 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_ssh 2026-03-13 01:19:06.381147 | orchestrator | 981fa0912903 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards 2026-03-13 01:19:06.381153 | orchestrator | 89d1cdad347e registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 23 minutes ago Up 23 minutes (healthy) mariadb 2026-03-13 01:19:06.381158 | orchestrator | 74ac2cad87ab registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch 2026-03-13 01:19:06.381164 | orchestrator | a17de27e7fdd registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-1 2026-03-13 01:19:06.381169 | orchestrator | ee1cc31f7780 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2026-03-13 01:19:06.381175 | orchestrator | bd819005e096 
registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) proxysql 2026-03-13 01:19:06.381181 | orchestrator | 2d95c0ba1834 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) haproxy 2026-03-13 01:19:06.381186 | orchestrator | 0a478b5bb33a registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_northd 2026-03-13 01:19:06.381191 | orchestrator | 6904b3b678ac registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_sb_db 2026-03-13 01:19:06.381209 | orchestrator | 21e9d6aa00a5 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_nb_db 2026-03-13 01:19:06.381215 | orchestrator | 49db89e4ffa7 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-1 2026-03-13 01:19:06.381220 | orchestrator | ad18df3b49ab registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2026-03-13 01:19:06.381225 | orchestrator | 37d6f7f6c3d8 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq 2026-03-13 01:19:06.381231 | orchestrator | e8320b87dcaa registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_vswitchd 2026-03-13 01:19:06.381237 | orchestrator | e452ccae0a30 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2026-03-13 01:19:06.381243 | orchestrator | db4560ce20a2 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2026-03-13 01:19:06.381248 | orchestrator | 3e74a714e8ba registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 30 minutes ago 
Up 30 minutes (healthy) redis 2026-03-13 01:19:06.381254 | orchestrator | 803b2aea9902 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2026-03-13 01:19:06.381259 | orchestrator | d3c88a8d0def registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2026-03-13 01:19:06.381265 | orchestrator | 7775fe4e6c27 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2026-03-13 01:19:06.381274 | orchestrator | 1c8e54f12428 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2026-03-13 01:19:06.676185 | orchestrator | 2026-03-13 01:19:06.676233 | orchestrator | ## Images @ testbed-node-1 2026-03-13 01:19:06.676239 | orchestrator | 2026-03-13 01:19:06.676243 | orchestrator | + echo 2026-03-13 01:19:06.676247 | orchestrator | + echo '## Images @ testbed-node-1' 2026-03-13 01:19:06.676252 | orchestrator | + echo 2026-03-13 01:19:06.676256 | orchestrator | + osism container testbed-node-1 images 2026-03-13 01:19:09.091554 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-13 01:19:09.091636 | orchestrator | registry.osism.tech/osism/ceph-daemon reef c5b650628899 22 hours ago 1.27GB 2026-03-13 01:19:09.091644 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 2bb4ef6fcc06 23 hours ago 673MB 2026-03-13 01:19:09.091648 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 f826fd8c0fda 23 hours ago 328MB 2026-03-13 01:19:09.091652 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 f30cbade480a 23 hours ago 584MB 2026-03-13 01:19:09.091656 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 e90bd16cdce3 23 hours ago 1.04GB 2026-03-13 01:19:09.092098 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 80f500fd8171 23 hours ago 282MB 2026-03-13 01:19:09.092149 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 
ab948246405c 23 hours ago 279MB 2026-03-13 01:19:09.092155 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 1e53dab52482 23 hours ago 272MB 2026-03-13 01:19:09.092159 | orchestrator | registry.osism.tech/kolla/cron 2024.2 41fd7166b66f 23 hours ago 271MB 2026-03-13 01:19:09.092163 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 5f9575c7de3a 23 hours ago 1.54GB 2026-03-13 01:19:09.092169 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 39e4bf85573b 23 hours ago 1.56GB 2026-03-13 01:19:09.092173 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 786a616e9f86 23 hours ago 422MB 2026-03-13 01:19:09.092177 | orchestrator | registry.osism.tech/kolla/redis 2024.2 79cbbadff70c 23 hours ago 278MB 2026-03-13 01:19:09.092181 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 7f675fae4e1f 23 hours ago 278MB 2026-03-13 01:19:09.092185 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 b39733124f89 23 hours ago 457MB 2026-03-13 01:19:09.092189 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 44ae16a78943 23 hours ago 1.15GB 2026-03-13 01:19:09.092193 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 9c6308dc8e8a 23 hours ago 284MB 2026-03-13 01:19:09.092197 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 e99035040be1 23 hours ago 284MB 2026-03-13 01:19:09.092200 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 0811a6e4bdc8 23 hours ago 304MB 2026-03-13 01:19:09.092204 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 05abc98b5a12 23 hours ago 306MB 2026-03-13 01:19:09.092208 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 542ab6787a61 23 hours ago 297MB 2026-03-13 01:19:09.092212 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 eea1c336348d 23 hours ago 363MB 2026-03-13 01:19:09.092216 | orchestrator | 
registry.osism.tech/kolla/prometheus-node-exporter 2024.2 09085dea3626 23 hours ago 311MB 2026-03-13 01:19:09.092220 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 4fca3028b700 23 hours ago 989MB 2026-03-13 01:19:09.092235 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 534376e957d8 23 hours ago 994MB 2026-03-13 01:19:09.092246 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 108470ab5c4b 23 hours ago 990MB 2026-03-13 01:19:09.092250 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 facafd96d906 23 hours ago 990MB 2026-03-13 01:19:09.092254 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 bce30dc0da1c 23 hours ago 990MB 2026-03-13 01:19:09.092257 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 e37b3fbb8361 23 hours ago 994MB 2026-03-13 01:19:09.092261 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 b2764349685c 23 hours ago 1.13GB 2026-03-13 01:19:09.092265 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 79decdcdb4d2 23 hours ago 1.25GB 2026-03-13 01:19:09.092269 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 1d7e6234cd5b 23 hours ago 1.72GB 2026-03-13 01:19:09.092273 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 9e5af04505af 23 hours ago 1.41GB 2026-03-13 01:19:09.092277 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 52f01491450e 23 hours ago 1.41GB 2026-03-13 01:19:09.092281 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 cc4f604334a9 23 hours ago 1.42GB 2026-03-13 01:19:09.092284 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 a5f5e8233564 23 hours ago 1.17GB 2026-03-13 01:19:09.092288 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 c378e867efd2 23 hours ago 996MB 2026-03-13 01:19:09.092292 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 5624aed8931e 23 hours ago 996MB 2026-03-13 
01:19:09.092296 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 9db286885af5 23 hours ago 996MB 2026-03-13 01:19:09.092300 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 17ff8652a341 23 hours ago 1.05GB 2026-03-13 01:19:09.092303 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 3de0708ce8c1 23 hours ago 1.07GB 2026-03-13 01:19:09.092307 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 f82f3bd15b51 23 hours ago 1.04GB 2026-03-13 01:19:09.092311 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 e07b0e418945 23 hours ago 1.06GB 2026-03-13 01:19:09.092323 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 1d669a84dd80 23 hours ago 1.03GB 2026-03-13 01:19:09.092327 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 d2b31fc919ac 24 hours ago 1.03GB 2026-03-13 01:19:09.092331 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 b9eb1b71fda7 24 hours ago 1.06GB 2026-03-13 01:19:09.092334 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 c013a6f84cbf 24 hours ago 1.03GB 2026-03-13 01:19:09.092338 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 335605c25420 24 hours ago 1.22GB 2026-03-13 01:19:09.092342 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 1b5ddcaeca26 24 hours ago 1.22GB 2026-03-13 01:19:09.092346 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 4bff38baab68 24 hours ago 1.22GB 2026-03-13 01:19:09.092349 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 9385a76b5b24 24 hours ago 1.37GB 2026-03-13 01:19:09.092353 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 0a193069d16e 24 hours ago 981MB 2026-03-13 01:19:09.092357 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 351f367ed5b4 24 hours ago 1.1GB 2026-03-13 01:19:09.092362 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 7c8095c3db60 24 hours ago 846MB 2026-03-13 
01:19:09.092376 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 f51841b0582e 24 hours ago 846MB 2026-03-13 01:19:09.092389 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 357485de29b2 24 hours ago 846MB 2026-03-13 01:19:09.092396 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 4007e5413f47 24 hours ago 846MB 2026-03-13 01:19:09.377850 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-13 01:19:09.378075 | orchestrator | ++ semver latest 5.0.0 2026-03-13 01:19:09.436255 | orchestrator | 2026-03-13 01:19:09.436324 | orchestrator | ## Containers @ testbed-node-2 2026-03-13 01:19:09.436331 | orchestrator | 2026-03-13 01:19:09.436336 | orchestrator | + [[ -1 -eq -1 ]] 2026-03-13 01:19:09.436340 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-13 01:19:09.436353 | orchestrator | + echo 2026-03-13 01:19:09.436358 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-03-13 01:19:09.436370 | orchestrator | + echo 2026-03-13 01:19:09.436374 | orchestrator | + osism container testbed-node-2 ps 2026-03-13 01:19:11.834296 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-13 01:19:11.834384 | orchestrator | 0bbe4e261246 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) octavia_worker 2026-03-13 01:19:11.834397 | orchestrator | 8205774fdd10 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) octavia_housekeeping 2026-03-13 01:19:11.834404 | orchestrator | b4b76e2b738a registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) octavia_health_manager 2026-03-13 01:19:11.834410 | orchestrator | 6c208bf7070d registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 6 minutes ago Up 6 minutes octavia_driver_agent 2026-03-13 01:19:11.834414 | 
orchestrator | 36e0e5294c09 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) octavia_api 2026-03-13 01:19:11.834418 | orchestrator | 7149a1b7b086 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2026-03-13 01:19:11.834422 | orchestrator | 86e8e84905d3 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2026-03-13 01:19:11.834426 | orchestrator | 0bd0a6b3a5a3 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_api 2026-03-13 01:19:11.834430 | orchestrator | 12acd584ee57 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes grafana 2026-03-13 01:19:11.834434 | orchestrator | 0940a52c0364 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_scheduler 2026-03-13 01:19:11.834438 | orchestrator | d963ef310d30 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_backup 2026-03-13 01:19:11.834444 | orchestrator | cf58ef2f5102 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_volume 2026-03-13 01:19:11.834449 | orchestrator | 4802343dd395 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) glance_api 2026-03-13 01:19:11.834456 | orchestrator | f90745cb75cf registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2026-03-13 01:19:11.834479 | orchestrator | a2052dd306cd registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api 2026-03-13 01:19:11.834486 | orchestrator | 6b664b5a69fb 
registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2026-03-13 01:19:11.834493 | orchestrator | a0df2ec5d381 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2026-03-13 01:19:11.834500 | orchestrator | ebace39fa197 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2026-03-13 01:19:11.834506 | orchestrator | bb43d5c90011 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2026-03-13 01:19:11.834513 | orchestrator | e105e7c65337 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2026-03-13 01:19:11.834519 | orchestrator | 3b5a29089a13 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor 2026-03-13 01:19:11.834540 | orchestrator | 8b138ea8cba0 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api 2026-03-13 01:19:11.834547 | orchestrator | cc8415fe84c1 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) neutron_server 2026-03-13 01:19:11.834555 | orchestrator | c18b778708c9 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) placement_api 2026-03-13 01:19:11.834559 | orchestrator | 862b4d0935c3 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_worker 2026-03-13 01:19:11.834563 | orchestrator | 55f9e6d2ea79 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) 
designate_mdns 2026-03-13 01:19:11.834567 | orchestrator | 57dff03a02dd registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_producer 2026-03-13 01:19:11.834571 | orchestrator | 57a48f0f4cd0 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_central 2026-03-13 01:19:11.834574 | orchestrator | f6f04d60e87b registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-2 2026-03-13 01:19:11.834589 | orchestrator | 77df460a01e6 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_api 2026-03-13 01:19:11.834593 | orchestrator | 4ec8ee642bde registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_worker 2026-03-13 01:19:11.834597 | orchestrator | c517ee91d4a6 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_backend_bind9 2026-03-13 01:19:11.834601 | orchestrator | 085e64f79e1e registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_keystone_listener 2026-03-13 01:19:11.834608 | orchestrator | 45cd6962f439 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_api 2026-03-13 01:19:11.834676 | orchestrator | 913bca0cb983 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone 2026-03-13 01:19:11.834681 | orchestrator | 481b8b1fd40b registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_fernet 2026-03-13 01:19:11.834685 | orchestrator | 3ca09a154b96 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 20 
minutes ago Up 20 minutes (healthy) horizon 2026-03-13 01:19:11.834692 | orchestrator | 61d94120ab0a registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_ssh 2026-03-13 01:19:11.834696 | orchestrator | ed4db49acd61 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards 2026-03-13 01:19:11.834700 | orchestrator | 01704a3795c0 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 2026-03-13 01:19:11.834703 | orchestrator | d3da02d5494f registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch 2026-03-13 01:19:11.834707 | orchestrator | 1d05c2d309ac registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-2 2026-03-13 01:19:11.834711 | orchestrator | ccbb1dbb14cb registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes keepalived 2026-03-13 01:19:11.834715 | orchestrator | 78d627be0a35 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) proxysql 2026-03-13 01:19:11.834723 | orchestrator | 1e16d98794b6 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) haproxy 2026-03-13 01:19:11.834727 | orchestrator | f4865550e603 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_northd 2026-03-13 01:19:11.834731 | orchestrator | 4456a2cd6a68 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_sb_db 2026-03-13 01:19:11.834735 | orchestrator | 8664df446db4 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_nb_db 2026-03-13 01:19:11.834738 | orchestrator | 
881b0e382c64 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2026-03-13 01:19:11.834742 | orchestrator | ccbf70212750 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-2 2026-03-13 01:19:11.834746 | orchestrator | e62a2a678b9f registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_controller 2026-03-13 01:19:11.834750 | orchestrator | 312903655a5f registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_vswitchd 2026-03-13 01:19:11.834754 | orchestrator | 285c2f627f64 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2026-03-13 01:19:11.834762 | orchestrator | 257a59071618 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2026-03-13 01:19:11.834768 | orchestrator | 857923b7b5ab registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2026-03-13 01:19:11.834774 | orchestrator | ac16e37048d9 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2026-03-13 01:19:11.834781 | orchestrator | 3b843484295a registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2026-03-13 01:19:11.834789 | orchestrator | 9a0c9a40a489 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2026-03-13 01:19:11.834796 | orchestrator | a7665dabd51e registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2026-03-13 01:19:12.118474 | orchestrator | 2026-03-13 01:19:12.118565 | orchestrator | ## Images @ testbed-node-2 2026-03-13 01:19:12.118576 | 
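The per-node container and image inventory above is driven by a simple loop over the testbed hosts (visible in the trace as `for node in testbed-manager testbed-node-0 ...` followed by `osism container <node> ps` / `images`). A minimal sketch of that pattern — with the `osism` CLI stubbed out so the sketch runs without a testbed; the real CLI call shape is taken from the trace:

```shell
#!/usr/bin/env bash
# Sketch of the per-node inventory loop seen in the trace above.
# The real job calls the `osism` CLI; here it is stubbed so the
# sketch is self-contained (assumption: output format aside, the
# loop structure matches the traced commands).
set -e

osism() { echo "(stub) osism $*"; }

for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2; do
    echo
    echo "## Containers @ ${node}"
    echo
    osism container "${node}" ps
    echo
    echo "## Images @ ${node}"
    echo
    osism container "${node}" images
done
```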
orchestrator | 2026-03-13 01:19:12.118584 | orchestrator | + echo 2026-03-13 01:19:12.118591 | orchestrator | + echo '## Images @ testbed-node-2' 2026-03-13 01:19:12.118597 | orchestrator | + echo 2026-03-13 01:19:12.118604 | orchestrator | + osism container testbed-node-2 images 2026-03-13 01:19:14.475960 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-13 01:19:14.476842 | orchestrator | registry.osism.tech/osism/ceph-daemon reef c5b650628899 22 hours ago 1.27GB 2026-03-13 01:19:14.476901 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 2bb4ef6fcc06 23 hours ago 673MB 2026-03-13 01:19:14.476911 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 f826fd8c0fda 23 hours ago 328MB 2026-03-13 01:19:14.476919 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 f30cbade480a 23 hours ago 584MB 2026-03-13 01:19:14.476925 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 e90bd16cdce3 23 hours ago 1.04GB 2026-03-13 01:19:14.476932 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 80f500fd8171 23 hours ago 282MB 2026-03-13 01:19:14.476938 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 ab948246405c 23 hours ago 279MB 2026-03-13 01:19:14.476944 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 1e53dab52482 23 hours ago 272MB 2026-03-13 01:19:14.476951 | orchestrator | registry.osism.tech/kolla/cron 2024.2 41fd7166b66f 23 hours ago 271MB 2026-03-13 01:19:14.476958 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 5f9575c7de3a 23 hours ago 1.54GB 2026-03-13 01:19:14.476964 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 39e4bf85573b 23 hours ago 1.56GB 2026-03-13 01:19:14.476992 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 786a616e9f86 23 hours ago 422MB 2026-03-13 01:19:14.476998 | orchestrator | registry.osism.tech/kolla/redis 2024.2 79cbbadff70c 23 hours ago 278MB 2026-03-13 01:19:14.477005 | orchestrator | registry.osism.tech/kolla/redis-sentinel 
2024.2 7f675fae4e1f 23 hours ago 278MB 2026-03-13 01:19:14.477011 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 b39733124f89 23 hours ago 457MB 2026-03-13 01:19:14.477017 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 44ae16a78943 23 hours ago 1.15GB 2026-03-13 01:19:14.477034 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 9c6308dc8e8a 23 hours ago 284MB 2026-03-13 01:19:14.477041 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 e99035040be1 23 hours ago 284MB 2026-03-13 01:19:14.477074 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 0811a6e4bdc8 23 hours ago 304MB 2026-03-13 01:19:14.477081 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 05abc98b5a12 23 hours ago 306MB 2026-03-13 01:19:14.477088 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 542ab6787a61 23 hours ago 297MB 2026-03-13 01:19:14.477095 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 eea1c336348d 23 hours ago 363MB 2026-03-13 01:19:14.477101 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 09085dea3626 23 hours ago 311MB 2026-03-13 01:19:14.477107 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 4fca3028b700 23 hours ago 989MB 2026-03-13 01:19:14.477114 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 534376e957d8 23 hours ago 994MB 2026-03-13 01:19:14.477120 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 108470ab5c4b 23 hours ago 990MB 2026-03-13 01:19:14.477127 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 facafd96d906 23 hours ago 990MB 2026-03-13 01:19:14.477134 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 bce30dc0da1c 23 hours ago 990MB 2026-03-13 01:19:14.477141 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 e37b3fbb8361 23 hours ago 994MB 2026-03-13 
01:19:14.477148 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 b2764349685c 23 hours ago 1.13GB 2026-03-13 01:19:14.477155 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 79decdcdb4d2 23 hours ago 1.25GB 2026-03-13 01:19:14.477162 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 1d7e6234cd5b 23 hours ago 1.72GB 2026-03-13 01:19:14.477168 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 9e5af04505af 23 hours ago 1.41GB 2026-03-13 01:19:14.477174 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 52f01491450e 23 hours ago 1.41GB 2026-03-13 01:19:14.477181 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 cc4f604334a9 23 hours ago 1.42GB 2026-03-13 01:19:14.477190 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 a5f5e8233564 23 hours ago 1.17GB 2026-03-13 01:19:14.477219 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 c378e867efd2 23 hours ago 996MB 2026-03-13 01:19:14.477225 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 5624aed8931e 23 hours ago 996MB 2026-03-13 01:19:14.477237 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 9db286885af5 23 hours ago 996MB 2026-03-13 01:19:14.477243 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 17ff8652a341 23 hours ago 1.05GB 2026-03-13 01:19:14.477249 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 3de0708ce8c1 23 hours ago 1.07GB 2026-03-13 01:19:14.477255 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 f82f3bd15b51 23 hours ago 1.04GB 2026-03-13 01:19:14.477262 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 e07b0e418945 24 hours ago 1.06GB 2026-03-13 01:19:14.477267 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 1d669a84dd80 24 hours ago 1.03GB 2026-03-13 01:19:14.477273 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 d2b31fc919ac 24 hours ago 1.03GB 2026-03-13 
01:19:14.477279 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 b9eb1b71fda7 24 hours ago 1.06GB 2026-03-13 01:19:14.477285 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 c013a6f84cbf 24 hours ago 1.03GB 2026-03-13 01:19:14.477299 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 335605c25420 24 hours ago 1.22GB 2026-03-13 01:19:14.477306 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 1b5ddcaeca26 24 hours ago 1.22GB 2026-03-13 01:19:14.477310 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 4bff38baab68 24 hours ago 1.22GB 2026-03-13 01:19:14.477314 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 9385a76b5b24 24 hours ago 1.37GB 2026-03-13 01:19:14.477318 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 0a193069d16e 24 hours ago 981MB 2026-03-13 01:19:14.477321 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 351f367ed5b4 24 hours ago 1.1GB 2026-03-13 01:19:14.477325 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 7c8095c3db60 24 hours ago 846MB 2026-03-13 01:19:14.477329 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 f51841b0582e 24 hours ago 846MB 2026-03-13 01:19:14.477333 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 357485de29b2 24 hours ago 846MB 2026-03-13 01:19:14.477337 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 4007e5413f47 24 hours ago 846MB 2026-03-13 01:19:14.751366 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-03-13 01:19:14.758566 | orchestrator | + set -e 2026-03-13 01:19:14.758611 | orchestrator | + source /opt/manager-vars.sh 2026-03-13 01:19:14.760124 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-13 01:19:14.760164 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-13 01:19:14.760170 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-13 01:19:14.760174 | orchestrator | ++ CEPH_VERSION=reef 2026-03-13 
01:19:14.760178 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-13 01:19:14.760186 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-13 01:19:14.760190 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-13 01:19:14.760194 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-13 01:19:14.760198 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-13 01:19:14.760202 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-13 01:19:14.760206 | orchestrator | ++ export ARA=false 2026-03-13 01:19:14.760210 | orchestrator | ++ ARA=false 2026-03-13 01:19:14.760214 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-13 01:19:14.760218 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-13 01:19:14.760222 | orchestrator | ++ export TEMPEST=true 2026-03-13 01:19:14.760225 | orchestrator | ++ TEMPEST=true 2026-03-13 01:19:14.760229 | orchestrator | ++ export IS_ZUUL=true 2026-03-13 01:19:14.760233 | orchestrator | ++ IS_ZUUL=true 2026-03-13 01:19:14.760237 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.64 2026-03-13 01:19:14.760241 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.64 2026-03-13 01:19:14.760244 | orchestrator | ++ export EXTERNAL_API=false 2026-03-13 01:19:14.760248 | orchestrator | ++ EXTERNAL_API=false 2026-03-13 01:19:14.760252 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-13 01:19:14.760256 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-13 01:19:14.760259 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-13 01:19:14.760263 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-13 01:19:14.760267 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-13 01:19:14.760271 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-13 01:19:14.760275 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-13 01:19:14.760278 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-03-13 01:19:14.769204 | orchestrator | + set -e 2026-03-13 01:19:14.769253 | 
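After sourcing `/opt/manager-vars.sh`, `check-services.sh` dispatches on `CEPH_STACK` (the trace shows `[[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]` before invoking `100-ceph-with-ansible.sh`). A sketch of that dispatch, with the check script path echoed rather than executed so it runs outside the testbed; the variable values come from the trace above:

```shell
#!/usr/bin/env bash
# Sketch of check-services.sh's dispatch on CEPH_STACK.
# In the job this value is exported by /opt/manager-vars.sh;
# here it is set inline and the target script is only echoed
# (assumption: only the branch structure is being illustrated).
set -e

export CEPH_STACK=ceph-ansible

case "${CEPH_STACK}" in
    ceph-ansible)
        echo "would run: /opt/configuration/scripts/check/100-ceph-with-ansible.sh"
        ;;
    *)
        echo "would run: alternative ceph check for ${CEPH_STACK}"
        ;;
esac
```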
orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-13 01:19:14.769259 | orchestrator | ++ export INTERACTIVE=false 2026-03-13 01:19:14.769264 | orchestrator | ++ INTERACTIVE=false 2026-03-13 01:19:14.769268 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-13 01:19:14.769272 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-13 01:19:14.769333 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-13 01:19:14.770576 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-13 01:19:14.776874 | orchestrator | 2026-03-13 01:19:14.776919 | orchestrator | # Ceph status 2026-03-13 01:19:14.776924 | orchestrator | 2026-03-13 01:19:14.776929 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-13 01:19:14.776934 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-13 01:19:14.776938 | orchestrator | + echo 2026-03-13 01:19:14.776955 | orchestrator | + echo '# Ceph status' 2026-03-13 01:19:14.776959 | orchestrator | + echo 2026-03-13 01:19:14.776963 | orchestrator | + ceph -s 2026-03-13 01:19:15.394370 | orchestrator | cluster: 2026-03-13 01:19:15.394418 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-03-13 01:19:15.394424 | orchestrator | health: HEALTH_OK 2026-03-13 01:19:15.394429 | orchestrator | 2026-03-13 01:19:15.394433 | orchestrator | services: 2026-03-13 01:19:15.394437 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 28m) 2026-03-13 01:19:15.394441 | orchestrator | mgr: testbed-node-1(active, since 17m), standbys: testbed-node-0, testbed-node-2 2026-03-13 01:19:15.394446 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-03-13 01:19:15.394450 | orchestrator | osd: 6 osds: 6 up (since 25m), 6 in (since 25m) 2026-03-13 01:19:15.394454 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-03-13 01:19:15.394458 | orchestrator | 2026-03-13 01:19:15.394461 | orchestrator | data: 
2026-03-13 01:19:15.394465 | orchestrator | volumes: 1/1 healthy 2026-03-13 01:19:15.394469 | orchestrator | pools: 14 pools, 401 pgs 2026-03-13 01:19:15.394473 | orchestrator | objects: 555 objects, 2.2 GiB 2026-03-13 01:19:15.394476 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2026-03-13 01:19:15.394482 | orchestrator | pgs: 401 active+clean 2026-03-13 01:19:15.394489 | orchestrator | 2026-03-13 01:19:15.439215 | orchestrator | 2026-03-13 01:19:15.439261 | orchestrator | # Ceph versions 2026-03-13 01:19:15.439267 | orchestrator | 2026-03-13 01:19:15.439271 | orchestrator | + echo 2026-03-13 01:19:15.439275 | orchestrator | + echo '# Ceph versions' 2026-03-13 01:19:15.439282 | orchestrator | + echo 2026-03-13 01:19:15.439288 | orchestrator | + ceph versions 2026-03-13 01:19:16.021454 | orchestrator | { 2026-03-13 01:19:16.021538 | orchestrator | "mon": { 2026-03-13 01:19:16.021554 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-03-13 01:19:16.021566 | orchestrator | }, 2026-03-13 01:19:16.021578 | orchestrator | "mgr": { 2026-03-13 01:19:16.021589 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-03-13 01:19:16.021601 | orchestrator | }, 2026-03-13 01:19:16.021612 | orchestrator | "osd": { 2026-03-13 01:19:16.021657 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2026-03-13 01:19:16.021668 | orchestrator | }, 2026-03-13 01:19:16.021679 | orchestrator | "mds": { 2026-03-13 01:19:16.021689 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-03-13 01:19:16.021700 | orchestrator | }, 2026-03-13 01:19:16.021709 | orchestrator | "rgw": { 2026-03-13 01:19:16.021754 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-03-13 01:19:16.021767 | orchestrator | }, 2026-03-13 01:19:16.021778 | 
orchestrator | "overall": { 2026-03-13 01:19:16.021789 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2026-03-13 01:19:16.021799 | orchestrator | } 2026-03-13 01:19:16.021810 | orchestrator | } 2026-03-13 01:19:16.063811 | orchestrator | 2026-03-13 01:19:16.063865 | orchestrator | # Ceph OSD tree 2026-03-13 01:19:16.063873 | orchestrator | 2026-03-13 01:19:16.063879 | orchestrator | + echo 2026-03-13 01:19:16.063885 | orchestrator | + echo '# Ceph OSD tree' 2026-03-13 01:19:16.063892 | orchestrator | + echo 2026-03-13 01:19:16.063897 | orchestrator | + ceph osd df tree 2026-03-13 01:19:16.526411 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-03-13 01:19:16.526474 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2026-03-13 01:19:16.526479 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2026-03-13 01:19:16.526484 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.69 1.13 201 up osd.0 2026-03-13 01:19:16.526488 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.0 GiB 979 MiB 1 KiB 74 MiB 19 GiB 5.14 0.87 189 up osd.5 2026-03-13 01:19:16.526492 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2026-03-13 01:19:16.526496 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.59 1.11 192 up osd.1 2026-03-13 01:19:16.526512 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.0 GiB 1003 MiB 1 KiB 70 MiB 19 GiB 5.24 0.89 196 up osd.4 2026-03-13 01:19:16.526516 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2026-03-13 01:19:16.526520 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.59 1.11 192 up osd.2 2026-03-13 01:19:16.526523 | orchestrator | 3 hdd 
0.01949 1.00000 20 GiB 1.0 GiB 1003 MiB 1 KiB 70 MiB 19 GiB 5.24 0.89 200 up osd.3 2026-03-13 01:19:16.526527 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2026-03-13 01:19:16.526531 | orchestrator | MIN/MAX VAR: 0.87/1.13 STDDEV: 0.71 2026-03-13 01:19:16.568316 | orchestrator | 2026-03-13 01:19:16.568363 | orchestrator | # Ceph monitor status 2026-03-13 01:19:16.568369 | orchestrator | 2026-03-13 01:19:16.568373 | orchestrator | + echo 2026-03-13 01:19:16.568394 | orchestrator | + echo '# Ceph monitor status' 2026-03-13 01:19:16.568399 | orchestrator | + echo 2026-03-13 01:19:16.568403 | orchestrator | + ceph mon stat 2026-03-13 01:19:17.137811 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-03-13 01:19:17.177688 | orchestrator | 2026-03-13 01:19:17.177777 | orchestrator | # Ceph quorum status 2026-03-13 01:19:17.177786 | orchestrator | 2026-03-13 01:19:17.177791 | orchestrator | + echo 2026-03-13 01:19:17.177829 | orchestrator | + echo '# Ceph quorum status' 2026-03-13 01:19:17.177834 | orchestrator | + echo 2026-03-13 01:19:17.177881 | orchestrator | + ceph quorum_status 2026-03-13 01:19:17.178429 | orchestrator | + jq 2026-03-13 01:19:17.817953 | orchestrator | { 2026-03-13 01:19:17.818114 | orchestrator | "election_epoch": 8, 2026-03-13 01:19:17.818129 | orchestrator | "quorum": [ 2026-03-13 01:19:17.818136 | orchestrator | 0, 2026-03-13 01:19:17.818142 | orchestrator | 1, 2026-03-13 01:19:17.818149 | orchestrator | 2 2026-03-13 01:19:17.818155 | orchestrator | ], 2026-03-13 01:19:17.818162 | orchestrator | "quorum_names": [ 2026-03-13 01:19:17.818169 | orchestrator | "testbed-node-0", 2026-03-13 
01:19:17.818176 | orchestrator | "testbed-node-1", 2026-03-13 01:19:17.818183 | orchestrator | "testbed-node-2" 2026-03-13 01:19:17.818187 | orchestrator | ], 2026-03-13 01:19:17.818191 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-03-13 01:19:17.818196 | orchestrator | "quorum_age": 1735, 2026-03-13 01:19:17.818200 | orchestrator | "features": { 2026-03-13 01:19:17.818204 | orchestrator | "quorum_con": "4540138322906710015", 2026-03-13 01:19:17.818208 | orchestrator | "quorum_mon": [ 2026-03-13 01:19:17.818212 | orchestrator | "kraken", 2026-03-13 01:19:17.818215 | orchestrator | "luminous", 2026-03-13 01:19:17.818220 | orchestrator | "mimic", 2026-03-13 01:19:17.818226 | orchestrator | "osdmap-prune", 2026-03-13 01:19:17.818232 | orchestrator | "nautilus", 2026-03-13 01:19:17.818237 | orchestrator | "octopus", 2026-03-13 01:19:17.818245 | orchestrator | "pacific", 2026-03-13 01:19:17.818255 | orchestrator | "elector-pinging", 2026-03-13 01:19:17.818260 | orchestrator | "quincy", 2026-03-13 01:19:17.818266 | orchestrator | "reef" 2026-03-13 01:19:17.818271 | orchestrator | ] 2026-03-13 01:19:17.818277 | orchestrator | }, 2026-03-13 01:19:17.818283 | orchestrator | "monmap": { 2026-03-13 01:19:17.818289 | orchestrator | "epoch": 1, 2026-03-13 01:19:17.818294 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-03-13 01:19:17.818301 | orchestrator | "modified": "2026-03-13T00:50:03.933180Z", 2026-03-13 01:19:17.818308 | orchestrator | "created": "2026-03-13T00:50:03.933180Z", 2026-03-13 01:19:17.818314 | orchestrator | "min_mon_release": 18, 2026-03-13 01:19:17.818320 | orchestrator | "min_mon_release_name": "reef", 2026-03-13 01:19:17.818326 | orchestrator | "election_strategy": 1, 2026-03-13 01:19:17.818332 | orchestrator | "disallowed_leaders: ": "", 2026-03-13 01:19:17.818337 | orchestrator | "stretch_mode": false, 2026-03-13 01:19:17.818343 | orchestrator | "tiebreaker_mon": "", 2026-03-13 01:19:17.818349 | orchestrator | 
"removed_ranks: ": "", 2026-03-13 01:19:17.818356 | orchestrator | "features": { 2026-03-13 01:19:17.818363 | orchestrator | "persistent": [ 2026-03-13 01:19:17.818401 | orchestrator | "kraken", 2026-03-13 01:19:17.818407 | orchestrator | "luminous", 2026-03-13 01:19:17.818410 | orchestrator | "mimic", 2026-03-13 01:19:17.818414 | orchestrator | "osdmap-prune", 2026-03-13 01:19:17.818418 | orchestrator | "nautilus", 2026-03-13 01:19:17.818421 | orchestrator | "octopus", 2026-03-13 01:19:17.818425 | orchestrator | "pacific", 2026-03-13 01:19:17.818428 | orchestrator | "elector-pinging", 2026-03-13 01:19:17.818432 | orchestrator | "quincy", 2026-03-13 01:19:17.818436 | orchestrator | "reef" 2026-03-13 01:19:17.818440 | orchestrator | ], 2026-03-13 01:19:17.818443 | orchestrator | "optional": [] 2026-03-13 01:19:17.818447 | orchestrator | }, 2026-03-13 01:19:17.818451 | orchestrator | "mons": [ 2026-03-13 01:19:17.818455 | orchestrator | { 2026-03-13 01:19:17.818458 | orchestrator | "rank": 0, 2026-03-13 01:19:17.818473 | orchestrator | "name": "testbed-node-0", 2026-03-13 01:19:17.818477 | orchestrator | "public_addrs": { 2026-03-13 01:19:17.818480 | orchestrator | "addrvec": [ 2026-03-13 01:19:17.818484 | orchestrator | { 2026-03-13 01:19:17.818488 | orchestrator | "type": "v2", 2026-03-13 01:19:17.818491 | orchestrator | "addr": "192.168.16.10:3300", 2026-03-13 01:19:17.818495 | orchestrator | "nonce": 0 2026-03-13 01:19:17.818499 | orchestrator | }, 2026-03-13 01:19:17.818502 | orchestrator | { 2026-03-13 01:19:17.818506 | orchestrator | "type": "v1", 2026-03-13 01:19:17.818510 | orchestrator | "addr": "192.168.16.10:6789", 2026-03-13 01:19:17.818513 | orchestrator | "nonce": 0 2026-03-13 01:19:17.818517 | orchestrator | } 2026-03-13 01:19:17.818522 | orchestrator | ] 2026-03-13 01:19:17.818526 | orchestrator | }, 2026-03-13 01:19:17.818530 | orchestrator | "addr": "192.168.16.10:6789/0", 2026-03-13 01:19:17.818534 | orchestrator | "public_addr": 
"192.168.16.10:6789/0", 2026-03-13 01:19:17.818538 | orchestrator | "priority": 0, 2026-03-13 01:19:17.818542 | orchestrator | "weight": 0, 2026-03-13 01:19:17.818546 | orchestrator | "crush_location": "{}" 2026-03-13 01:19:17.818551 | orchestrator | }, 2026-03-13 01:19:17.818555 | orchestrator | { 2026-03-13 01:19:17.818559 | orchestrator | "rank": 1, 2026-03-13 01:19:17.818563 | orchestrator | "name": "testbed-node-1", 2026-03-13 01:19:17.818569 | orchestrator | "public_addrs": { 2026-03-13 01:19:17.818575 | orchestrator | "addrvec": [ 2026-03-13 01:19:17.818581 | orchestrator | { 2026-03-13 01:19:17.818587 | orchestrator | "type": "v2", 2026-03-13 01:19:17.818593 | orchestrator | "addr": "192.168.16.11:3300", 2026-03-13 01:19:17.818607 | orchestrator | "nonce": 0 2026-03-13 01:19:17.818614 | orchestrator | }, 2026-03-13 01:19:17.818693 | orchestrator | { 2026-03-13 01:19:17.818699 | orchestrator | "type": "v1", 2026-03-13 01:19:17.818705 | orchestrator | "addr": "192.168.16.11:6789", 2026-03-13 01:19:17.818709 | orchestrator | "nonce": 0 2026-03-13 01:19:17.818713 | orchestrator | } 2026-03-13 01:19:17.818718 | orchestrator | ] 2026-03-13 01:19:17.818722 | orchestrator | }, 2026-03-13 01:19:17.818726 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-03-13 01:19:17.818731 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-03-13 01:19:17.818735 | orchestrator | "priority": 0, 2026-03-13 01:19:17.818739 | orchestrator | "weight": 0, 2026-03-13 01:19:17.818744 | orchestrator | "crush_location": "{}" 2026-03-13 01:19:17.818748 | orchestrator | }, 2026-03-13 01:19:17.818752 | orchestrator | { 2026-03-13 01:19:17.818756 | orchestrator | "rank": 2, 2026-03-13 01:19:17.818761 | orchestrator | "name": "testbed-node-2", 2026-03-13 01:19:17.818765 | orchestrator | "public_addrs": { 2026-03-13 01:19:17.818769 | orchestrator | "addrvec": [ 2026-03-13 01:19:17.818773 | orchestrator | { 2026-03-13 01:19:17.818777 | orchestrator | "type": "v2", 2026-03-13 
01:19:17.818782 | orchestrator | "addr": "192.168.16.12:3300", 2026-03-13 01:19:17.818786 | orchestrator | "nonce": 0 2026-03-13 01:19:17.818790 | orchestrator | }, 2026-03-13 01:19:17.818794 | orchestrator | { 2026-03-13 01:19:17.818798 | orchestrator | "type": "v1", 2026-03-13 01:19:17.818803 | orchestrator | "addr": "192.168.16.12:6789", 2026-03-13 01:19:17.818807 | orchestrator | "nonce": 0 2026-03-13 01:19:17.818811 | orchestrator | } 2026-03-13 01:19:17.818816 | orchestrator | ] 2026-03-13 01:19:17.818820 | orchestrator | }, 2026-03-13 01:19:17.818824 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-03-13 01:19:17.818829 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-03-13 01:19:17.818839 | orchestrator | "priority": 0, 2026-03-13 01:19:17.818843 | orchestrator | "weight": 0, 2026-03-13 01:19:17.818848 | orchestrator | "crush_location": "{}" 2026-03-13 01:19:17.818852 | orchestrator | } 2026-03-13 01:19:17.818857 | orchestrator | ] 2026-03-13 01:19:17.818861 | orchestrator | } 2026-03-13 01:19:17.818865 | orchestrator | } 2026-03-13 01:19:17.818984 | orchestrator | 2026-03-13 01:19:17.818990 | orchestrator | # Ceph free space status 2026-03-13 01:19:17.818994 | orchestrator | 2026-03-13 01:19:17.818998 | orchestrator | + echo 2026-03-13 01:19:17.819001 | orchestrator | + echo '# Ceph free space status' 2026-03-13 01:19:17.819005 | orchestrator | + echo 2026-03-13 01:19:17.819009 | orchestrator | + ceph df 2026-03-13 01:19:18.381886 | orchestrator | --- RAW STORAGE --- 2026-03-13 01:19:18.381975 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-03-13 01:19:18.381995 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2026-03-13 01:19:18.382004 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2026-03-13 01:19:18.382055 | orchestrator | 2026-03-13 01:19:18.382068 | orchestrator | --- POOLS --- 2026-03-13 01:19:18.382077 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-03-13 01:19:18.382088 | 
orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2026-03-13 01:19:18.382097 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-03-13 01:19:18.382107 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-03-13 01:19:18.382117 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-03-13 01:19:18.382127 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-03-13 01:19:18.382136 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-03-13 01:19:18.382146 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB 2026-03-13 01:19:18.382155 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-03-13 01:19:18.382166 | orchestrator | .rgw.root 9 32 3.0 KiB 7 56 KiB 0 53 GiB 2026-03-13 01:19:18.382175 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-03-13 01:19:18.382186 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-03-13 01:19:18.382194 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.92 35 GiB 2026-03-13 01:19:18.382204 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-03-13 01:19:18.382214 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-03-13 01:19:18.425918 | orchestrator | ++ semver latest 5.0.0 2026-03-13 01:19:18.473917 | orchestrator | + [[ -1 -eq -1 ]] 2026-03-13 01:19:18.473990 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-13 01:19:18.473997 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2026-03-13 01:19:18.474002 | orchestrator | + osism apply facts 2026-03-13 01:19:30.607043 | orchestrator | 2026-03-13 01:19:30 | INFO  | Prepare task for execution of facts. 2026-03-13 01:19:30.667344 | orchestrator | 2026-03-13 01:19:30 | INFO  | Task efc2935b-5fbb-4fad-a3a7-fb58f40c0ee3 (facts) was prepared for execution. 2026-03-13 01:19:30.667440 | orchestrator | 2026-03-13 01:19:30 | INFO  | It takes a moment until task efc2935b-5fbb-4fad-a3a7-fb58f40c0ee3 (facts) has been started and output is visible here. 
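The quorum status JSON dumped above lists both the full monitor map (`monmap.mons`) and the monitors currently in quorum (`quorum_names`). A minimal sketch of how a health check could compare the two — field names are taken from the log output, but the abbreviated document below and the check itself are illustrative, not part of the job:

```python
import json

# Minimal subset of the quorum status JSON seen in the log above
# (field names match the output; values abbreviated for illustration).
quorum_status = json.loads("""
{
  "quorum_names": ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
  "quorum_leader_name": "testbed-node-0",
  "monmap": {
    "mons": [
      {"rank": 0, "name": "testbed-node-0"},
      {"rank": 1, "name": "testbed-node-1"},
      {"rank": 2, "name": "testbed-node-2"}
    ]
  }
}
""")

# All monitors known to the monmap vs. those actually in quorum.
all_mons = {m["name"] for m in quorum_status["monmap"]["mons"]}
in_quorum = set(quorum_status["quorum_names"])
missing = all_mons - in_quorum

print("quorum OK" if not missing else f"missing from quorum: {sorted(missing)}")
```

In the log above all three testbed nodes appear in `quorum_names`, so a check like this would report a full quorum.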
2026-03-13 01:19:43.995538 | orchestrator | 2026-03-13 01:19:43.995613 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-13 01:19:43.995623 | orchestrator | 2026-03-13 01:19:43.995630 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-13 01:19:43.995668 | orchestrator | Friday 13 March 2026 01:19:35 +0000 (0:00:00.265) 0:00:00.265 ********** 2026-03-13 01:19:43.995676 | orchestrator | ok: [testbed-manager] 2026-03-13 01:19:43.995683 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:19:43.995690 | orchestrator | ok: [testbed-node-1] 2026-03-13 01:19:43.995696 | orchestrator | ok: [testbed-node-2] 2026-03-13 01:19:43.995702 | orchestrator | ok: [testbed-node-3] 2026-03-13 01:19:43.995726 | orchestrator | ok: [testbed-node-4] 2026-03-13 01:19:43.995732 | orchestrator | ok: [testbed-node-5] 2026-03-13 01:19:43.995739 | orchestrator | 2026-03-13 01:19:43.995745 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-13 01:19:43.995752 | orchestrator | Friday 13 March 2026 01:19:36 +0000 (0:00:01.448) 0:00:01.714 ********** 2026-03-13 01:19:43.995758 | orchestrator | skipping: [testbed-manager] 2026-03-13 01:19:43.995765 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:19:43.995771 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:19:43.995777 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:19:43.995783 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:19:43.995789 | orchestrator | skipping: [testbed-node-4] 2026-03-13 01:19:43.995795 | orchestrator | skipping: [testbed-node-5] 2026-03-13 01:19:43.995801 | orchestrator | 2026-03-13 01:19:43.995808 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-13 01:19:43.995814 | orchestrator | 2026-03-13 01:19:43.995820 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-13 01:19:43.995826 | orchestrator | Friday 13 March 2026 01:19:37 +0000 (0:00:01.314) 0:00:03.028 ********** 2026-03-13 01:19:43.995832 | orchestrator | ok: [testbed-node-1] 2026-03-13 01:19:43.995848 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:19:43.995855 | orchestrator | ok: [testbed-node-2] 2026-03-13 01:19:43.995861 | orchestrator | ok: [testbed-manager] 2026-03-13 01:19:43.995867 | orchestrator | ok: [testbed-node-4] 2026-03-13 01:19:43.995873 | orchestrator | ok: [testbed-node-5] 2026-03-13 01:19:43.995879 | orchestrator | ok: [testbed-node-3] 2026-03-13 01:19:43.995886 | orchestrator | 2026-03-13 01:19:43.995892 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-13 01:19:43.995898 | orchestrator | 2026-03-13 01:19:43.995904 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-13 01:19:43.995910 | orchestrator | Friday 13 March 2026 01:19:43 +0000 (0:00:05.298) 0:00:08.327 ********** 2026-03-13 01:19:43.995916 | orchestrator | skipping: [testbed-manager] 2026-03-13 01:19:43.995922 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:19:43.995929 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:19:43.995935 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:19:43.995941 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:19:43.995962 | orchestrator | skipping: [testbed-node-4] 2026-03-13 01:19:43.995969 | orchestrator | skipping: [testbed-node-5] 2026-03-13 01:19:43.995975 | orchestrator | 2026-03-13 01:19:43.995982 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 01:19:43.995988 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-13 01:19:43.995996 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-13 01:19:43.996002 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-13 01:19:43.996008 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-13 01:19:43.996014 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-13 01:19:43.996020 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-13 01:19:43.996026 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-13 01:19:43.996032 | orchestrator | 2026-03-13 01:19:43.996038 | orchestrator | 2026-03-13 01:19:43.996044 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 01:19:43.996055 | orchestrator | Friday 13 March 2026 01:19:43 +0000 (0:00:00.552) 0:00:08.880 ********** 2026-03-13 01:19:43.996062 | orchestrator | =============================================================================== 2026-03-13 01:19:43.996068 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.30s 2026-03-13 01:19:43.996081 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.45s 2026-03-13 01:19:43.996088 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.31s 2026-03-13 01:19:43.996100 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s 2026-03-13 01:19:44.311934 | orchestrator | + osism validate ceph-mons 2026-03-13 01:20:05.551679 | orchestrator | 2026-03-13 01:20:05.551734 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-03-13 01:20:05.551741 | orchestrator | 2026-03-13 01:20:05.551745 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2026-03-13 01:20:05.551750 | orchestrator | Friday 13 March 2026 01:19:51 +0000 (0:00:00.485) 0:00:00.485 ********** 2026-03-13 01:20:05.551754 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-13 01:20:05.551759 | orchestrator | 2026-03-13 01:20:05.551763 | orchestrator | TASK [Create report output directory] ****************************************** 2026-03-13 01:20:05.551766 | orchestrator | Friday 13 March 2026 01:19:51 +0000 (0:00:00.825) 0:00:01.310 ********** 2026-03-13 01:20:05.551770 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-13 01:20:05.551774 | orchestrator | 2026-03-13 01:20:05.551778 | orchestrator | TASK [Define report vars] ****************************************************** 2026-03-13 01:20:05.551782 | orchestrator | Friday 13 March 2026 01:19:52 +0000 (0:00:00.920) 0:00:02.231 ********** 2026-03-13 01:20:05.551785 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:20:05.551790 | orchestrator | 2026-03-13 01:20:05.551793 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-03-13 01:20:05.551797 | orchestrator | Friday 13 March 2026 01:19:52 +0000 (0:00:00.130) 0:00:02.361 ********** 2026-03-13 01:20:05.551805 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:20:05.551809 | orchestrator | ok: [testbed-node-1] 2026-03-13 01:20:05.551813 | orchestrator | ok: [testbed-node-2] 2026-03-13 01:20:05.551817 | orchestrator | 2026-03-13 01:20:05.551820 | orchestrator | TASK [Get container info] ****************************************************** 2026-03-13 01:20:05.551824 | orchestrator | Friday 13 March 2026 01:19:53 +0000 (0:00:00.309) 0:00:02.671 ********** 2026-03-13 01:20:05.551828 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:20:05.551832 | orchestrator | ok: [testbed-node-2] 2026-03-13 01:20:05.551835 | orchestrator | ok: [testbed-node-1] 2026-03-13 01:20:05.551839 | 
orchestrator | 2026-03-13 01:20:05.551843 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-03-13 01:20:05.551846 | orchestrator | Friday 13 March 2026 01:19:54 +0000 (0:00:01.192) 0:00:03.863 ********** 2026-03-13 01:20:05.551850 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:20:05.551854 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:20:05.551858 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:20:05.551862 | orchestrator | 2026-03-13 01:20:05.551866 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-03-13 01:20:05.551870 | orchestrator | Friday 13 March 2026 01:19:54 +0000 (0:00:00.292) 0:00:04.156 ********** 2026-03-13 01:20:05.551873 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:20:05.551877 | orchestrator | ok: [testbed-node-1] 2026-03-13 01:20:05.551881 | orchestrator | ok: [testbed-node-2] 2026-03-13 01:20:05.551884 | orchestrator | 2026-03-13 01:20:05.551888 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-13 01:20:05.551892 | orchestrator | Friday 13 March 2026 01:19:55 +0000 (0:00:00.474) 0:00:04.631 ********** 2026-03-13 01:20:05.551895 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:20:05.551899 | orchestrator | ok: [testbed-node-1] 2026-03-13 01:20:05.551903 | orchestrator | ok: [testbed-node-2] 2026-03-13 01:20:05.551906 | orchestrator | 2026-03-13 01:20:05.551910 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-03-13 01:20:05.551929 | orchestrator | Friday 13 March 2026 01:19:55 +0000 (0:00:00.342) 0:00:04.974 ********** 2026-03-13 01:20:05.551933 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:20:05.551938 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:20:05.551944 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:20:05.551950 | orchestrator | 2026-03-13 
01:20:05.551956 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-03-13 01:20:05.551964 | orchestrator | Friday 13 March 2026 01:19:55 +0000 (0:00:00.296) 0:00:05.270 ********** 2026-03-13 01:20:05.551973 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:20:05.551979 | orchestrator | ok: [testbed-node-1] 2026-03-13 01:20:05.551985 | orchestrator | ok: [testbed-node-2] 2026-03-13 01:20:05.551991 | orchestrator | 2026-03-13 01:20:05.551996 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-13 01:20:05.552002 | orchestrator | Friday 13 March 2026 01:19:56 +0000 (0:00:00.460) 0:00:05.730 ********** 2026-03-13 01:20:05.552008 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:20:05.552014 | orchestrator | 2026-03-13 01:20:05.552020 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-13 01:20:05.552025 | orchestrator | Friday 13 March 2026 01:19:56 +0000 (0:00:00.242) 0:00:05.972 ********** 2026-03-13 01:20:05.552030 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:20:05.552036 | orchestrator | 2026-03-13 01:20:05.552041 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-13 01:20:05.552047 | orchestrator | Friday 13 March 2026 01:19:56 +0000 (0:00:00.270) 0:00:06.243 ********** 2026-03-13 01:20:05.552053 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:20:05.552059 | orchestrator | 2026-03-13 01:20:05.552064 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-13 01:20:05.552070 | orchestrator | Friday 13 March 2026 01:19:57 +0000 (0:00:00.241) 0:00:06.485 ********** 2026-03-13 01:20:05.552076 | orchestrator | 2026-03-13 01:20:05.552082 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-13 01:20:05.552093 | orchestrator | 
Friday 13 March 2026 01:19:57 +0000 (0:00:00.068) 0:00:06.553 ********** 2026-03-13 01:20:05.552100 | orchestrator | 2026-03-13 01:20:05.552106 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-13 01:20:05.552112 | orchestrator | Friday 13 March 2026 01:19:57 +0000 (0:00:00.083) 0:00:06.636 ********** 2026-03-13 01:20:05.552118 | orchestrator | 2026-03-13 01:20:05.552125 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-13 01:20:05.552132 | orchestrator | Friday 13 March 2026 01:19:57 +0000 (0:00:00.071) 0:00:06.708 ********** 2026-03-13 01:20:05.552138 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:20:05.552153 | orchestrator | 2026-03-13 01:20:05.552165 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-03-13 01:20:05.552171 | orchestrator | Friday 13 March 2026 01:19:57 +0000 (0:00:00.251) 0:00:06.960 ********** 2026-03-13 01:20:05.552177 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:20:05.552184 | orchestrator | 2026-03-13 01:20:05.552202 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-03-13 01:20:05.552209 | orchestrator | Friday 13 March 2026 01:19:57 +0000 (0:00:00.247) 0:00:07.207 ********** 2026-03-13 01:20:05.552215 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:20:05.552219 | orchestrator | 2026-03-13 01:20:05.552223 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-03-13 01:20:05.552226 | orchestrator | Friday 13 March 2026 01:19:57 +0000 (0:00:00.117) 0:00:07.325 ********** 2026-03-13 01:20:05.552230 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:20:05.552235 | orchestrator | 2026-03-13 01:20:05.552241 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-03-13 01:20:05.552248 | orchestrator | Friday 
13 March 2026 01:19:59 +0000 (0:00:01.430) 0:00:08.755 ********** 2026-03-13 01:20:05.552257 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:20:05.552325 | orchestrator | 2026-03-13 01:20:05.552335 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-03-13 01:20:05.552339 | orchestrator | Friday 13 March 2026 01:19:59 +0000 (0:00:00.474) 0:00:09.230 ********** 2026-03-13 01:20:05.552342 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:20:05.552348 | orchestrator | 2026-03-13 01:20:05.552355 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-03-13 01:20:05.552365 | orchestrator | Friday 13 March 2026 01:19:59 +0000 (0:00:00.132) 0:00:09.362 ********** 2026-03-13 01:20:05.552371 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:20:05.552376 | orchestrator | 2026-03-13 01:20:05.552382 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-03-13 01:20:05.552389 | orchestrator | Friday 13 March 2026 01:20:00 +0000 (0:00:00.328) 0:00:09.691 ********** 2026-03-13 01:20:05.552395 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:20:05.552402 | orchestrator | 2026-03-13 01:20:05.552415 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-03-13 01:20:05.552421 | orchestrator | Friday 13 March 2026 01:20:00 +0000 (0:00:00.341) 0:00:10.032 ********** 2026-03-13 01:20:05.552427 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:20:05.552433 | orchestrator | 2026-03-13 01:20:05.552440 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-03-13 01:20:05.552446 | orchestrator | Friday 13 March 2026 01:20:00 +0000 (0:00:00.112) 0:00:10.145 ********** 2026-03-13 01:20:05.552453 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:20:05.552457 | orchestrator | 2026-03-13 01:20:05.552461 | orchestrator | TASK [Prepare 
status test vars] ************************************************ 2026-03-13 01:20:05.552464 | orchestrator | Friday 13 March 2026 01:20:00 +0000 (0:00:00.112) 0:00:10.258 ********** 2026-03-13 01:20:05.552468 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:20:05.552472 | orchestrator | 2026-03-13 01:20:05.552475 | orchestrator | TASK [Gather status data] ****************************************************** 2026-03-13 01:20:05.552479 | orchestrator | Friday 13 March 2026 01:20:00 +0000 (0:00:00.093) 0:00:10.352 ********** 2026-03-13 01:20:05.552483 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:20:05.552486 | orchestrator | 2026-03-13 01:20:05.552490 | orchestrator | TASK [Set health test data] **************************************************** 2026-03-13 01:20:05.552494 | orchestrator | Friday 13 March 2026 01:20:02 +0000 (0:00:01.218) 0:00:11.570 ********** 2026-03-13 01:20:05.552497 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:20:05.552501 | orchestrator | 2026-03-13 01:20:05.552505 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2026-03-13 01:20:05.552510 | orchestrator | Friday 13 March 2026 01:20:02 +0000 (0:00:00.260) 0:00:11.830 ********** 2026-03-13 01:20:05.552516 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:20:05.552524 | orchestrator | 2026-03-13 01:20:05.552532 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-03-13 01:20:05.552537 | orchestrator | Friday 13 March 2026 01:20:02 +0000 (0:00:00.126) 0:00:11.956 ********** 2026-03-13 01:20:05.552543 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:20:05.552549 | orchestrator | 2026-03-13 01:20:05.552555 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-03-13 01:20:05.552560 | orchestrator | Friday 13 March 2026 01:20:02 +0000 (0:00:00.119) 0:00:12.076 ********** 2026-03-13 01:20:05.552565 | orchestrator | 
skipping: [testbed-node-0] 2026-03-13 01:20:05.552571 | orchestrator | 2026-03-13 01:20:05.552577 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-03-13 01:20:05.552582 | orchestrator | Friday 13 March 2026 01:20:02 +0000 (0:00:00.252) 0:00:12.329 ********** 2026-03-13 01:20:05.552588 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:20:05.552599 | orchestrator | 2026-03-13 01:20:05.552613 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-13 01:20:05.552619 | orchestrator | Friday 13 March 2026 01:20:03 +0000 (0:00:00.121) 0:00:12.450 ********** 2026-03-13 01:20:05.552625 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-13 01:20:05.552640 | orchestrator | 2026-03-13 01:20:05.552646 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-03-13 01:20:05.552715 | orchestrator | Friday 13 March 2026 01:20:03 +0000 (0:00:00.237) 0:00:12.687 ********** 2026-03-13 01:20:05.552733 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:20:05.552739 | orchestrator | 2026-03-13 01:20:05.552743 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-13 01:20:05.552747 | orchestrator | Friday 13 March 2026 01:20:03 +0000 (0:00:00.220) 0:00:12.907 ********** 2026-03-13 01:20:05.552751 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-13 01:20:05.552755 | orchestrator | 2026-03-13 01:20:05.552759 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-13 01:20:05.552763 | orchestrator | Friday 13 March 2026 01:20:05 +0000 (0:00:01.554) 0:00:14.462 ********** 2026-03-13 01:20:05.552766 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-13 01:20:05.552770 | orchestrator | 2026-03-13 01:20:05.552774 | orchestrator | TASK [Aggregate 
test results step three] *************************************** 2026-03-13 01:20:05.552778 | orchestrator | Friday 13 March 2026 01:20:05 +0000 (0:00:00.268) 0:00:14.730 ********** 2026-03-13 01:20:05.552781 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-13 01:20:05.552785 | orchestrator | 2026-03-13 01:20:05.552797 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-13 01:20:07.955612 | orchestrator | Friday 13 March 2026 01:20:05 +0000 (0:00:00.241) 0:00:14.972 ********** 2026-03-13 01:20:07.955745 | orchestrator | 2026-03-13 01:20:07.955764 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-13 01:20:07.955785 | orchestrator | Friday 13 March 2026 01:20:05 +0000 (0:00:00.065) 0:00:15.038 ********** 2026-03-13 01:20:07.955798 | orchestrator | 2026-03-13 01:20:07.955811 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-13 01:20:07.955823 | orchestrator | Friday 13 March 2026 01:20:05 +0000 (0:00:00.064) 0:00:15.102 ********** 2026-03-13 01:20:07.955834 | orchestrator | 2026-03-13 01:20:07.955846 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-13 01:20:07.955859 | orchestrator | Friday 13 March 2026 01:20:05 +0000 (0:00:00.067) 0:00:15.170 ********** 2026-03-13 01:20:07.955873 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-13 01:20:07.955885 | orchestrator | 2026-03-13 01:20:07.955897 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-13 01:20:07.955919 | orchestrator | Friday 13 March 2026 01:20:07 +0000 (0:00:01.303) 0:00:16.474 ********** 2026-03-13 01:20:07.955932 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-03-13 01:20:07.955957 | orchestrator |  "msg": [ 2026-03-13 
01:20:07.955971 | orchestrator |  "Validator run completed.", 2026-03-13 01:20:07.955983 | orchestrator |  "You can find the report file here:", 2026-03-13 01:20:07.955996 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-03-13T01:19:51+00:00-report.json", 2026-03-13 01:20:07.956010 | orchestrator |  "on the following host:", 2026-03-13 01:20:07.956024 | orchestrator |  "testbed-manager" 2026-03-13 01:20:07.956037 | orchestrator |  ] 2026-03-13 01:20:07.956051 | orchestrator | } 2026-03-13 01:20:07.956066 | orchestrator | 2026-03-13 01:20:07.956080 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 01:20:07.956092 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-03-13 01:20:07.956100 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-13 01:20:07.956108 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-13 01:20:07.956147 | orchestrator | 2026-03-13 01:20:07.956154 | orchestrator | 2026-03-13 01:20:07.956162 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 01:20:07.956169 | orchestrator | Friday 13 March 2026 01:20:07 +0000 (0:00:00.640) 0:00:17.114 ********** 2026-03-13 01:20:07.956176 | orchestrator | =============================================================================== 2026-03-13 01:20:07.956183 | orchestrator | Aggregate test results step one ----------------------------------------- 1.55s 2026-03-13 01:20:07.956192 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.43s 2026-03-13 01:20:07.956201 | orchestrator | Write report file ------------------------------------------------------- 1.30s 2026-03-13 01:20:07.956209 | orchestrator | Gather status data 
------------------------------------------------------ 1.22s 2026-03-13 01:20:07.956217 | orchestrator | Get container info ------------------------------------------------------ 1.19s 2026-03-13 01:20:07.956226 | orchestrator | Create report output directory ------------------------------------------ 0.92s 2026-03-13 01:20:07.956234 | orchestrator | Get timestamp for report file ------------------------------------------- 0.83s 2026-03-13 01:20:07.956242 | orchestrator | Print report file information ------------------------------------------- 0.64s 2026-03-13 01:20:07.956255 | orchestrator | Set test result to passed if container is existing ---------------------- 0.47s 2026-03-13 01:20:07.956274 | orchestrator | Set quorum test data ---------------------------------------------------- 0.47s 2026-03-13 01:20:07.956289 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.46s 2026-03-13 01:20:07.956302 | orchestrator | Prepare test data ------------------------------------------------------- 0.34s 2026-03-13 01:20:07.956315 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.34s 2026-03-13 01:20:07.956330 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.33s 2026-03-13 01:20:07.956344 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s 2026-03-13 01:20:07.956356 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.30s 2026-03-13 01:20:07.956365 | orchestrator | Set test result to failed if container is missing ----------------------- 0.29s 2026-03-13 01:20:07.956374 | orchestrator | Aggregate test results step two ----------------------------------------- 0.27s 2026-03-13 01:20:07.956382 | orchestrator | Aggregate test results step two ----------------------------------------- 0.27s 2026-03-13 01:20:07.956392 | orchestrator | Set health test data 
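The ceph-mons validator above writes its results to a JSON report file on testbed-manager (the path is printed in the "Print report file information" task). A sketch of how such a report could be evaluated programmatically — the report schema shown here is an assumption for illustration, not the actual osism validator format:

```python
# Hypothetical validator report structure -- the actual schema of the
# /opt/reports/validator/*-report.json files is an assumption here.
report = {
    "validator": "ceph-mons",
    "result": "passed",
    "tests": [
        {"name": "container-exists", "result": "passed"},
        {"name": "quorum", "result": "passed"},
        {"name": "cluster-fsid", "result": "passed"},
        {"name": "cluster-health", "result": "passed"},
    ],
}

# Collect any tests that did not pass, mirroring the play's
# "Set validation result to failed if a test failed" aggregation step.
failed = [t["name"] for t in report["tests"] if t["result"] != "passed"]

print("all validations passed" if not failed else f"failed: {failed}")
```

Since every test task in the run above either passed or was skipped, a report consumer like this would find no failures.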
---------------------------------------------------- 0.26s 2026-03-13 01:20:08.275067 | orchestrator | + osism validate ceph-mgrs 2026-03-13 01:20:39.450417 | orchestrator | 2026-03-13 01:20:39.450483 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-03-13 01:20:39.450493 | orchestrator | 2026-03-13 01:20:39.450500 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-03-13 01:20:39.450507 | orchestrator | Friday 13 March 2026 01:20:25 +0000 (0:00:00.478) 0:00:00.478 ********** 2026-03-13 01:20:39.450514 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-13 01:20:39.450520 | orchestrator | 2026-03-13 01:20:39.450526 | orchestrator | TASK [Create report output directory] ****************************************** 2026-03-13 01:20:39.450533 | orchestrator | Friday 13 March 2026 01:20:25 +0000 (0:00:00.857) 0:00:01.336 ********** 2026-03-13 01:20:39.450539 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-13 01:20:39.450546 | orchestrator | 2026-03-13 01:20:39.450553 | orchestrator | TASK [Define report vars] ****************************************************** 2026-03-13 01:20:39.450559 | orchestrator | Friday 13 March 2026 01:20:26 +0000 (0:00:00.945) 0:00:02.282 ********** 2026-03-13 01:20:39.450565 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:20:39.450572 | orchestrator | 2026-03-13 01:20:39.450578 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-03-13 01:20:39.450585 | orchestrator | Friday 13 March 2026 01:20:26 +0000 (0:00:00.162) 0:00:02.444 ********** 2026-03-13 01:20:39.450591 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:20:39.450597 | orchestrator | ok: [testbed-node-1] 2026-03-13 01:20:39.450617 | orchestrator | ok: [testbed-node-2] 2026-03-13 01:20:39.450624 | orchestrator | 2026-03-13 01:20:39.450630 | orchestrator | TASK [Get 
container info] ****************************************************** 2026-03-13 01:20:39.450636 | orchestrator | Friday 13 March 2026 01:20:27 +0000 (0:00:00.301) 0:00:02.746 ********** 2026-03-13 01:20:39.450642 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:20:39.450648 | orchestrator | ok: [testbed-node-1] 2026-03-13 01:20:39.450654 | orchestrator | ok: [testbed-node-2] 2026-03-13 01:20:39.450660 | orchestrator | 2026-03-13 01:20:39.450667 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-03-13 01:20:39.450744 | orchestrator | Friday 13 March 2026 01:20:28 +0000 (0:00:01.179) 0:00:03.925 ********** 2026-03-13 01:20:39.450753 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:20:39.450759 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:20:39.450765 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:20:39.450771 | orchestrator | 2026-03-13 01:20:39.450777 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-03-13 01:20:39.450784 | orchestrator | Friday 13 March 2026 01:20:28 +0000 (0:00:00.273) 0:00:04.199 ********** 2026-03-13 01:20:39.450793 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:20:39.450803 | orchestrator | ok: [testbed-node-1] 2026-03-13 01:20:39.450817 | orchestrator | ok: [testbed-node-2] 2026-03-13 01:20:39.450831 | orchestrator | 2026-03-13 01:20:39.450841 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-13 01:20:39.450851 | orchestrator | Friday 13 March 2026 01:20:29 +0000 (0:00:00.468) 0:00:04.667 ********** 2026-03-13 01:20:39.450861 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:20:39.450870 | orchestrator | ok: [testbed-node-1] 2026-03-13 01:20:39.450878 | orchestrator | ok: [testbed-node-2] 2026-03-13 01:20:39.450887 | orchestrator | 2026-03-13 01:20:39.450896 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] 
******************** 2026-03-13 01:20:39.450906 | orchestrator | Friday 13 March 2026 01:20:29 +0000 (0:00:00.292) 0:00:04.960 ********** 2026-03-13 01:20:39.450916 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:20:39.450925 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:20:39.450933 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:20:39.450943 | orchestrator | 2026-03-13 01:20:39.450952 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-03-13 01:20:39.450962 | orchestrator | Friday 13 March 2026 01:20:29 +0000 (0:00:00.370) 0:00:05.330 ********** 2026-03-13 01:20:39.450972 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:20:39.450981 | orchestrator | ok: [testbed-node-1] 2026-03-13 01:20:39.450991 | orchestrator | ok: [testbed-node-2] 2026-03-13 01:20:39.451002 | orchestrator | 2026-03-13 01:20:39.451013 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-13 01:20:39.451024 | orchestrator | Friday 13 March 2026 01:20:30 +0000 (0:00:00.484) 0:00:05.815 ********** 2026-03-13 01:20:39.451034 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:20:39.451044 | orchestrator | 2026-03-13 01:20:39.451055 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-13 01:20:39.451067 | orchestrator | Friday 13 March 2026 01:20:30 +0000 (0:00:00.277) 0:00:06.093 ********** 2026-03-13 01:20:39.451078 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:20:39.451105 | orchestrator | 2026-03-13 01:20:39.451118 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-13 01:20:39.451130 | orchestrator | Friday 13 March 2026 01:20:30 +0000 (0:00:00.255) 0:00:06.348 ********** 2026-03-13 01:20:39.451140 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:20:39.451150 | orchestrator | 2026-03-13 01:20:39.451162 | orchestrator | TASK 
[Flush handlers] ********************************************************** 2026-03-13 01:20:39.451173 | orchestrator | Friday 13 March 2026 01:20:31 +0000 (0:00:00.261) 0:00:06.609 ********** 2026-03-13 01:20:39.451196 | orchestrator | 2026-03-13 01:20:39.451203 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-13 01:20:39.451210 | orchestrator | Friday 13 March 2026 01:20:31 +0000 (0:00:00.077) 0:00:06.687 ********** 2026-03-13 01:20:39.451227 | orchestrator | 2026-03-13 01:20:39.451243 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-13 01:20:39.451257 | orchestrator | Friday 13 March 2026 01:20:31 +0000 (0:00:00.069) 0:00:06.757 ********** 2026-03-13 01:20:39.451267 | orchestrator | 2026-03-13 01:20:39.451281 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-13 01:20:39.451296 | orchestrator | Friday 13 March 2026 01:20:31 +0000 (0:00:00.072) 0:00:06.829 ********** 2026-03-13 01:20:39.451307 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:20:39.451317 | orchestrator | 2026-03-13 01:20:39.451328 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-03-13 01:20:39.451339 | orchestrator | Friday 13 March 2026 01:20:31 +0000 (0:00:00.262) 0:00:07.092 ********** 2026-03-13 01:20:39.451349 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:20:39.451359 | orchestrator | 2026-03-13 01:20:39.451390 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-03-13 01:20:39.451403 | orchestrator | Friday 13 March 2026 01:20:31 +0000 (0:00:00.237) 0:00:07.329 ********** 2026-03-13 01:20:39.451413 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:20:39.451424 | orchestrator | 2026-03-13 01:20:39.451431 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 
2026-03-13 01:20:39.451437 | orchestrator | Friday 13 March 2026 01:20:31 +0000 (0:00:00.114) 0:00:07.444 ********** 2026-03-13 01:20:39.451443 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:20:39.451449 | orchestrator | 2026-03-13 01:20:39.451455 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-03-13 01:20:39.451462 | orchestrator | Friday 13 March 2026 01:20:33 +0000 (0:00:01.995) 0:00:09.439 ********** 2026-03-13 01:20:39.451468 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:20:39.451474 | orchestrator | 2026-03-13 01:20:39.451480 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-03-13 01:20:39.451486 | orchestrator | Friday 13 March 2026 01:20:34 +0000 (0:00:00.412) 0:00:09.852 ********** 2026-03-13 01:20:39.451492 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:20:39.451498 | orchestrator | 2026-03-13 01:20:39.451504 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-03-13 01:20:39.451510 | orchestrator | Friday 13 March 2026 01:20:34 +0000 (0:00:00.311) 0:00:10.163 ********** 2026-03-13 01:20:39.451516 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:20:39.451522 | orchestrator | 2026-03-13 01:20:39.451528 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-03-13 01:20:39.451535 | orchestrator | Friday 13 March 2026 01:20:34 +0000 (0:00:00.143) 0:00:10.306 ********** 2026-03-13 01:20:39.451541 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:20:39.451547 | orchestrator | 2026-03-13 01:20:39.451553 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-13 01:20:39.451559 | orchestrator | Friday 13 March 2026 01:20:34 +0000 (0:00:00.138) 0:00:10.445 ********** 2026-03-13 01:20:39.451565 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-13 
01:20:39.451571 | orchestrator | 2026-03-13 01:20:39.451577 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-03-13 01:20:39.451583 | orchestrator | Friday 13 March 2026 01:20:35 +0000 (0:00:00.286) 0:00:10.732 ********** 2026-03-13 01:20:39.451589 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:20:39.451595 | orchestrator | 2026-03-13 01:20:39.451601 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-13 01:20:39.451607 | orchestrator | Friday 13 March 2026 01:20:35 +0000 (0:00:00.279) 0:00:11.011 ********** 2026-03-13 01:20:39.451624 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-13 01:20:39.451630 | orchestrator | 2026-03-13 01:20:39.451636 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-13 01:20:39.451642 | orchestrator | Friday 13 March 2026 01:20:36 +0000 (0:00:01.203) 0:00:12.215 ********** 2026-03-13 01:20:39.451654 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-13 01:20:39.451660 | orchestrator | 2026-03-13 01:20:39.451666 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-13 01:20:39.451693 | orchestrator | Friday 13 March 2026 01:20:37 +0000 (0:00:00.247) 0:00:12.463 ********** 2026-03-13 01:20:39.451705 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-13 01:20:39.451714 | orchestrator | 2026-03-13 01:20:39.451721 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-13 01:20:39.451727 | orchestrator | Friday 13 March 2026 01:20:37 +0000 (0:00:00.247) 0:00:12.711 ********** 2026-03-13 01:20:39.451733 | orchestrator | 2026-03-13 01:20:39.451739 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-13 01:20:39.451745 | orchestrator 
| Friday 13 March 2026 01:20:37 +0000 (0:00:00.070) 0:00:12.781 ********** 2026-03-13 01:20:39.451751 | orchestrator | 2026-03-13 01:20:39.451757 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-13 01:20:39.451763 | orchestrator | Friday 13 March 2026 01:20:37 +0000 (0:00:00.070) 0:00:12.852 ********** 2026-03-13 01:20:39.451769 | orchestrator | 2026-03-13 01:20:39.451775 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-13 01:20:39.451781 | orchestrator | Friday 13 March 2026 01:20:37 +0000 (0:00:00.245) 0:00:13.097 ********** 2026-03-13 01:20:39.451787 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-13 01:20:39.451793 | orchestrator | 2026-03-13 01:20:39.451799 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-13 01:20:39.451806 | orchestrator | Friday 13 March 2026 01:20:39 +0000 (0:00:01.383) 0:00:14.480 ********** 2026-03-13 01:20:39.451817 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-03-13 01:20:39.451834 | orchestrator |  "msg": [ 2026-03-13 01:20:39.451845 | orchestrator |  "Validator run completed.", 2026-03-13 01:20:39.451855 | orchestrator |  "You can find the report file here:", 2026-03-13 01:20:39.451865 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-03-13T01:20:25+00:00-report.json", 2026-03-13 01:20:39.451875 | orchestrator |  "on the following host:", 2026-03-13 01:20:39.451884 | orchestrator |  "testbed-manager" 2026-03-13 01:20:39.451893 | orchestrator |  ] 2026-03-13 01:20:39.451903 | orchestrator | } 2026-03-13 01:20:39.451912 | orchestrator | 2026-03-13 01:20:39.451921 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 01:20:39.451932 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2026-03-13 01:20:39.451944 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-13 01:20:39.451963 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-13 01:20:39.740267 | orchestrator | 2026-03-13 01:20:39.740317 | orchestrator | 2026-03-13 01:20:39.740324 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 01:20:39.740330 | orchestrator | Friday 13 March 2026 01:20:39 +0000 (0:00:00.408) 0:00:14.889 ********** 2026-03-13 01:20:39.740335 | orchestrator | =============================================================================== 2026-03-13 01:20:39.740339 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.00s 2026-03-13 01:20:39.740344 | orchestrator | Write report file ------------------------------------------------------- 1.38s 2026-03-13 01:20:39.740349 | orchestrator | Aggregate test results step one ----------------------------------------- 1.20s 2026-03-13 01:20:39.740353 | orchestrator | Get container info ------------------------------------------------------ 1.18s 2026-03-13 01:20:39.740358 | orchestrator | Create report output directory ------------------------------------------ 0.95s 2026-03-13 01:20:39.740374 | orchestrator | Get timestamp for report file ------------------------------------------- 0.86s 2026-03-13 01:20:39.740379 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.48s 2026-03-13 01:20:39.740384 | orchestrator | Set test result to passed if container is existing ---------------------- 0.47s 2026-03-13 01:20:39.740388 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.41s 2026-03-13 01:20:39.740393 | orchestrator | Print report file information ------------------------------------------- 0.41s 2026-03-13 01:20:39.740397 | 
orchestrator | Flush handlers ---------------------------------------------------------- 0.39s 2026-03-13 01:20:39.740402 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.37s 2026-03-13 01:20:39.740406 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.31s 2026-03-13 01:20:39.740419 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s 2026-03-13 01:20:39.740424 | orchestrator | Prepare test data ------------------------------------------------------- 0.29s 2026-03-13 01:20:39.740429 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.29s 2026-03-13 01:20:39.740433 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.28s 2026-03-13 01:20:39.740438 | orchestrator | Aggregate test results step one ----------------------------------------- 0.28s 2026-03-13 01:20:39.740442 | orchestrator | Set test result to failed if container is missing ----------------------- 0.27s 2026-03-13 01:20:39.740447 | orchestrator | Print report file information ------------------------------------------- 0.26s 2026-03-13 01:20:40.041410 | orchestrator | + osism validate ceph-osds 2026-03-13 01:21:01.179282 | orchestrator | 2026-03-13 01:21:01.179403 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-03-13 01:21:01.179416 | orchestrator | 2026-03-13 01:21:01.179425 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-03-13 01:21:01.179433 | orchestrator | Friday 13 March 2026 01:20:56 +0000 (0:00:00.434) 0:00:00.434 ********** 2026-03-13 01:21:01.179442 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-13 01:21:01.179449 | orchestrator | 2026-03-13 01:21:01.179457 | orchestrator | TASK [Get extra vars for Ceph configuration] 
*********************************** 2026-03-13 01:21:01.179500 | orchestrator | Friday 13 March 2026 01:20:57 +0000 (0:00:00.808) 0:00:01.242 ********** 2026-03-13 01:21:01.179510 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-13 01:21:01.179517 | orchestrator | 2026-03-13 01:21:01.179525 | orchestrator | TASK [Create report output directory] ****************************************** 2026-03-13 01:21:01.179533 | orchestrator | Friday 13 March 2026 01:20:58 +0000 (0:00:00.533) 0:00:01.775 ********** 2026-03-13 01:21:01.179540 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-13 01:21:01.179548 | orchestrator | 2026-03-13 01:21:01.179556 | orchestrator | TASK [Define report vars] ****************************************************** 2026-03-13 01:21:01.179564 | orchestrator | Friday 13 March 2026 01:20:58 +0000 (0:00:00.708) 0:00:02.484 ********** 2026-03-13 01:21:01.179571 | orchestrator | ok: [testbed-node-3] 2026-03-13 01:21:01.179580 | orchestrator | 2026-03-13 01:21:01.179586 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-03-13 01:21:01.179593 | orchestrator | Friday 13 March 2026 01:20:59 +0000 (0:00:00.130) 0:00:02.615 ********** 2026-03-13 01:21:01.179600 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:21:01.179607 | orchestrator | 2026-03-13 01:21:01.179614 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-03-13 01:21:01.179621 | orchestrator | Friday 13 March 2026 01:20:59 +0000 (0:00:00.114) 0:00:02.730 ********** 2026-03-13 01:21:01.179629 | orchestrator | skipping: [testbed-node-3] 2026-03-13 01:21:01.179636 | orchestrator | skipping: [testbed-node-4] 2026-03-13 01:21:01.179644 | orchestrator | skipping: [testbed-node-5] 2026-03-13 01:21:01.179651 | orchestrator | 2026-03-13 01:21:01.179659 | orchestrator | TASK [Define OSD test variables] 
*********************************************** 2026-03-13 01:21:01.179667 | orchestrator | Friday 13 March 2026 01:20:59 +0000 (0:00:00.305) 0:00:03.035 ********** 2026-03-13 01:21:01.179751 | orchestrator | ok: [testbed-node-3] 2026-03-13 01:21:01.179761 | orchestrator | 2026-03-13 01:21:01.179769 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-03-13 01:21:01.179777 | orchestrator | Friday 13 March 2026 01:20:59 +0000 (0:00:00.148) 0:00:03.184 ********** 2026-03-13 01:21:01.179784 | orchestrator | ok: [testbed-node-3] 2026-03-13 01:21:01.179791 | orchestrator | ok: [testbed-node-4] 2026-03-13 01:21:01.179798 | orchestrator | ok: [testbed-node-5] 2026-03-13 01:21:01.179806 | orchestrator | 2026-03-13 01:21:01.179813 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-03-13 01:21:01.179821 | orchestrator | Friday 13 March 2026 01:20:59 +0000 (0:00:00.328) 0:00:03.512 ********** 2026-03-13 01:21:01.179829 | orchestrator | ok: [testbed-node-3] 2026-03-13 01:21:01.179837 | orchestrator | 2026-03-13 01:21:01.179845 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-13 01:21:01.179854 | orchestrator | Friday 13 March 2026 01:21:00 +0000 (0:00:00.774) 0:00:04.287 ********** 2026-03-13 01:21:01.179862 | orchestrator | ok: [testbed-node-3] 2026-03-13 01:21:01.179870 | orchestrator | ok: [testbed-node-4] 2026-03-13 01:21:01.179877 | orchestrator | ok: [testbed-node-5] 2026-03-13 01:21:01.179885 | orchestrator | 2026-03-13 01:21:01.179893 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-03-13 01:21:01.179901 | orchestrator | Friday 13 March 2026 01:21:00 +0000 (0:00:00.283) 0:00:04.571 ********** 2026-03-13 01:21:01.179911 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7ee25c643a30b633b1368401e6e4de5942ee076e17d5bbd74c5941b305329393', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-03-13 01:21:01.179922 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ece77caeeaf3e9e298270291ad318fae8a7547dc21313c3f56dc4ce84cf4ca6f', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-03-13 01:21:01.179933 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0c4aefb7f08637920b45a8993eb975a3cc5d26c2554091a8bec9d1840f8555c0', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-03-13 01:21:01.179956 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0c2d2c6ed55f465060f8929d480c6f50f439159d9e6172ad96e2a7547e1a5bef', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2026-03-13 01:21:01.179970 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c318edf15814f90127e149b00eb0791ef1030a661779d38dc85fba5c7930d626', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2026-03-13 01:21:01.179996 | orchestrator | skipping: [testbed-node-3] => (item={'id': '44a6a3a6b591e9eece67ba9a10df91566b3455582342017805de9134e96011d4', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-03-13 01:21:01.180006 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e7e7e57ca796b17fbaceb2fdf5560bfc06b87e20dfcfefaf2599136b3a5eaf43', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 15 minutes (healthy)'})  2026-03-13 01:21:01.180013 | 
orchestrator | skipping: [testbed-node-3] => (item={'id': '800f0e9fd373e7ca6985f64d44785e8729d373e0f73f041d8e6fa0d8b2706fc0', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2026-03-13 01:21:01.180023 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2e9b789234cbd1000fcfc161cc21327523e22088051ade1f59c9bc0f0348daf1', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})  2026-03-13 01:21:01.180039 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1d3b528c0da4051912e165d9b37470aec3605b78faf3c2521ffe7d750a14097e', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})  2026-03-13 01:21:01.180049 | orchestrator | ok: [testbed-node-3] => (item={'id': 'e41f49714921a7559f7c65162840ac198ebd49bf364caaecc7d47015d1fbf096', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 25 minutes'}) 2026-03-13 01:21:01.180057 | orchestrator | ok: [testbed-node-3] => (item={'id': 'aeede09ea44974dd270c03b2616b56162321678857cde836b09972a77b06f9b1', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 25 minutes'}) 2026-03-13 01:21:01.180065 | orchestrator | skipping: [testbed-node-3] => (item={'id': '24860fd03c6c4ca821a19380677ffa8a955df5c8006a64204832e9e9eeceab7c', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})  2026-03-13 01:21:01.180072 | orchestrator | skipping: [testbed-node-3] => (item={'id': '600fa8465589ecd409f2200587bcca1f69aee9dc8ea01b81e02dd4c6cb162df6', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 
30 minutes (healthy)'})  2026-03-13 01:21:01.180082 | orchestrator | skipping: [testbed-node-3] => (item={'id': '58dd8563488d83f458c366a6953904e439392377137e3add35ab89deb63f3f26', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})  2026-03-13 01:21:01.180091 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ca0e9b79c7a0598a8be2345dac418230bd3892376af929a3512425f5cc87d597', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2026-03-13 01:21:01.180098 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a42b00d705a6b521f217c92ca90657e731e7b5792b35e6b75d3cb7471dd16ac5', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2026-03-13 01:21:01.180106 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1ac713f5cac41c86e37f1b9ac5a693b36402c3c018643100edb4c9dfd6d19be7', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2026-03-13 01:21:01.180112 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2e03a3df8cd6c61b45b715bdd9998f1cf2e562d564a0a1f1bf9b3c5ac0dea337', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-03-13 01:21:01.180119 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a3020d7016f34dc6fb876d234e584f1569178cdc70a8b5cac7771700c26f40b1', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-03-13 01:21:01.180132 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6191acc9abcc35d22acc124208704c70ba2ecf24b736f701fdc0735036cd202b', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 
'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-03-13 01:21:01.180145 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'dd3617d69f18fa8d1c742f62648b53f47476c5c5a58c947bfb62f5f9f9c14187', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2026-03-13 01:21:01.364093 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b526c4876a5bf3b30692895cee3aedd6cd141fdce7cfba746c0751c6a57e95ff', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2026-03-13 01:21:01.364202 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'eefed5017638105f6caf661da2bad0f8be221ed14ebec06659c77bf6c0450bdf', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-03-13 01:21:01.364214 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4e6433d98e242fd85b7becdd15b6ea7cca0d88d645f18c2a258d9fdd33ba814a', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 15 minutes (healthy)'})  2026-03-13 01:21:01.364221 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e8335402acda3d0458c813d9769fe313c86b236382a301c79da1e735a86fa40b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2026-03-13 01:21:01.364228 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c83bd9ba0106135f31bea76447cb2a96568ab04ac697bb57894789b158f726ac', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 24 minutes'})  2026-03-13 01:21:01.364236 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'b835359e9a3899316c58d3fbfe4b5e466feef0daf4600f000dc30e6986cbd9d1', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 24 minutes'})  2026-03-13 01:21:01.364246 | orchestrator | ok: [testbed-node-4] => (item={'id': '06f9a06cedcf0100697c380d6d3b0269818faef10c36d21c9c1d32e2f98ae851', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 25 minutes'}) 2026-03-13 01:21:01.364253 | orchestrator | ok: [testbed-node-4] => (item={'id': '787ea2c027ef142e89c4d12bc7d3d410109e2a8e6959e5bdc07b6d1fb46de423', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 25 minutes'}) 2026-03-13 01:21:01.364260 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'cecedcec89628525eeea1aaaac507509018fd4d93d04737a88b9083d2acb5447', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})  2026-03-13 01:21:01.364266 | orchestrator | skipping: [testbed-node-4] => (item={'id': '531994b6f540a0e23b8c72fed909af73f182e35fb52fab2700041e7cd8716bc3', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2026-03-13 01:21:01.364274 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ec7e26506ef7e1a265c8fbcd5931b5ccb698f2c595bbff65fd7fc817c5912d37', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})  2026-03-13 01:21:01.364283 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4f10565753de85cb87c6a5e37f5a6f9ae67ca3ab9e170aba5b694f679f62c8b2', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2026-03-13 01:21:01.364303 | orchestrator | skipping: 
[testbed-node-4] => (item={'id': 'b9d9f6d72c3e04a68502b14715d6c82214efbcd70a28b5cd7ebb415c35cb50a3', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})
2026-03-13 01:21:01.364311 | orchestrator | skipping: [testbed-node-4] => (item={'id': '707e912ba852935489258eb3317db919b69bcdf3cbfa065eecd801ebe74263bf', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})
2026-03-13 01:21:01.364318 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd521c62ec453b5676e3f34a759e61c5196c78ef2ad1a7dbfdcc5a6e64a05becd', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2026-03-13 01:21:01.364343 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3a80f14fb7e47072dd3ccc0c8413f6c3fb7cbaa7047dd05603e0dd9d29145c38', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2026-03-13 01:21:01.364351 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd0d73bc74f3fa472fbb964fa48348a90096012d15b836e1c048b67aa2868ee3e', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2026-03-13 01:21:01.364359 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f4c1eb66a1987ad2f339525401f4670af72edfb15ef83664ec2c9885022b8293', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})
2026-03-13 01:21:01.364366 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0d69db4debafef9c1701eb3f81a3f2991ee5f2842712ab711eebf4b8566ad46d', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})
2026-03-13 01:21:01.364373 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e8844268a530c97c320e735e00e96770c3282c5860f59828dbb567c80eb5a9a9', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2026-03-13 01:21:01.364379 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8f60e662611d1bc9fc7eb63a9d71816abed85173dc6f5b03a9112aa58707d8ec', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 15 minutes (healthy)'})
2026-03-13 01:21:01.364385 | orchestrator | skipping: [testbed-node-5] => (item={'id': '145b1942df9f264b3c21e38b10a430bb4d3e80f23494077c3c3a5b1c2e7d7ba2', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})
2026-03-13 01:21:01.364392 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1171e4f3d7d2c7e657c49f9e29b92d17f9026614ed194232e14efd0bf3d367db', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 24 minutes'})
2026-03-13 01:21:01.364398 | orchestrator | skipping: [testbed-node-5] => (item={'id': '583b2d41c463a9e210a2a2379a5910603320ed637a550acc131ebab2c015ba31', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 24 minutes'})
2026-03-13 01:21:01.364405 | orchestrator | ok: [testbed-node-5] => (item={'id': 'a1f601dd75b01ea2ea9870b25223260dd8f4d1b902f8f5f09a35f89cbe2ca012', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 25 minutes'})
2026-03-13 01:21:01.364412 | orchestrator | ok: [testbed-node-5] => (item={'id': '4b78aff5d066c58982806b7855e8c90511638ded47195ff9fdcf9ba6ba04a342', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 25 minutes'})
2026-03-13 01:21:01.364419 | orchestrator | skipping: [testbed-node-5] => (item={'id': '018802918cd47a819299ab97617d78247e8c059d77036f00524f1e33da10c672', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})
2026-03-13 01:21:01.364426 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c8fe83da64c8844416b74a0cd1ca3f0d43b3709988f6aba07f1389c5149b82d3', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})
2026-03-13 01:21:01.364437 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0e7e50a0d95f3074c0ad1c1e2ea672e5c94d4310f130e895f8dc5c9dfe4d108c', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})
2026-03-13 01:21:01.364447 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e7f53257015b727415c8dfacac74613c4a027a41dd8c86c8470ef2540839b527', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})
2026-03-13 01:21:01.364453 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4046012d0f1c2980530fcf28f8cdf4e4e911f114c86f40e3be52d6d9505cb0e9', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})
2026-03-13 01:21:01.364465 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7bcc9805604fbff9a9f215b5d6fd3d823cfdbf695a218f262a0171f1a0f31da6', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})
2026-03-13 01:21:14.603588 | orchestrator |
2026-03-13 01:21:14.603650 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2026-03-13 01:21:14.603662 | orchestrator | Friday 13 March 2026 01:21:01 +0000 (0:00:00.459) 0:00:05.031 **********
2026-03-13 01:21:14.603670 | orchestrator | ok: [testbed-node-3]
2026-03-13 01:21:14.603677 | orchestrator | ok: [testbed-node-4]
2026-03-13 01:21:14.603684 | orchestrator | ok: [testbed-node-5]
2026-03-13 01:21:14.603720 | orchestrator |
2026-03-13 01:21:14.603729 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2026-03-13 01:21:14.603736 | orchestrator | Friday 13 March 2026 01:21:01 +0000 (0:00:00.292) 0:00:05.323 **********
2026-03-13 01:21:14.603744 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:21:14.603752 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:21:14.603759 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:21:14.603766 | orchestrator |
2026-03-13 01:21:14.603774 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2026-03-13 01:21:14.603781 | orchestrator | Friday 13 March 2026 01:21:02 +0000 (0:00:00.480) 0:00:05.804 **********
2026-03-13 01:21:14.603788 | orchestrator | ok: [testbed-node-3]
2026-03-13 01:21:14.603796 | orchestrator | ok: [testbed-node-4]
2026-03-13 01:21:14.603804 | orchestrator | ok: [testbed-node-5]
2026-03-13 01:21:14.603811 | orchestrator |
2026-03-13 01:21:14.603818 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-13 01:21:14.603822 | orchestrator | Friday 13 March 2026 01:21:02 +0000 (0:00:00.304) 0:00:06.112 **********
2026-03-13 01:21:14.603827 | orchestrator | ok: [testbed-node-3]
2026-03-13 01:21:14.603831 | orchestrator | ok: [testbed-node-4]
2026-03-13 01:21:14.603835 | orchestrator | ok: [testbed-node-5]
2026-03-13 01:21:14.603840 | orchestrator |
2026-03-13 01:21:14.603844 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2026-03-13 01:21:14.603849 | orchestrator | Friday 13 March 2026 01:21:02 +0000 (0:00:00.304) 0:00:06.416 **********
2026-03-13 01:21:14.603853 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2026-03-13 01:21:14.603859 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2026-03-13 01:21:14.603863 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:21:14.603867 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2026-03-13 01:21:14.603872 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2026-03-13 01:21:14.603876 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:21:14.603881 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2026-03-13 01:21:14.603885 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2026-03-13 01:21:14.603889 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:21:14.603893 | orchestrator |
2026-03-13 01:21:14.603898 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2026-03-13 01:21:14.603914 | orchestrator | Friday 13 March 2026 01:21:03 +0000 (0:00:00.303) 0:00:06.720 **********
2026-03-13 01:21:14.603919 | orchestrator | ok: [testbed-node-3]
2026-03-13 01:21:14.603923 | orchestrator | ok: [testbed-node-4]
2026-03-13 01:21:14.603927 | orchestrator | ok: [testbed-node-5]
2026-03-13 01:21:14.603931 | orchestrator |
2026-03-13 01:21:14.603936 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-03-13 01:21:14.603940 | orchestrator | Friday 13 March 2026 01:21:03 +0000 (0:00:00.448) 0:00:07.168 **********
2026-03-13 01:21:14.603944 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:21:14.603949 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:21:14.603953 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:21:14.603957 | orchestrator |
2026-03-13 01:21:14.603961 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-03-13 01:21:14.603966 | orchestrator | Friday 13 March 2026 01:21:03 +0000 (0:00:00.297) 0:00:07.466 **********
2026-03-13 01:21:14.603970 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:21:14.603974 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:21:14.603978 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:21:14.603983 | orchestrator |
2026-03-13 01:21:14.603987 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2026-03-13 01:21:14.603991 | orchestrator | Friday 13 March 2026 01:21:04 +0000 (0:00:00.273) 0:00:07.740 **********
2026-03-13 01:21:14.603996 | orchestrator | ok: [testbed-node-3]
2026-03-13 01:21:14.604000 | orchestrator | ok: [testbed-node-4]
2026-03-13 01:21:14.604004 | orchestrator | ok: [testbed-node-5]
2026-03-13 01:21:14.604008 | orchestrator |
2026-03-13 01:21:14.604013 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-13 01:21:14.604017 | orchestrator | Friday 13 March 2026 01:21:04 +0000 (0:00:00.274) 0:00:08.015 **********
2026-03-13 01:21:14.604021 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:21:14.604026 | orchestrator |
2026-03-13 01:21:14.604030 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-13 01:21:14.604034 | orchestrator | Friday 13 March 2026 01:21:05 +0000 (0:00:00.654) 0:00:08.669 **********
2026-03-13 01:21:14.604038 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:21:14.604043 | orchestrator |
2026-03-13 01:21:14.604047 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-13 01:21:14.604051 | orchestrator | Friday 13 March 2026 01:21:05 +0000 (0:00:00.248) 0:00:08.918 **********
2026-03-13 01:21:14.604056 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:21:14.604060 | orchestrator |
2026-03-13 01:21:14.604064 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-13 01:21:14.604068 | orchestrator | Friday 13 March 2026 01:21:05 +0000 (0:00:00.064) 0:00:09.178 **********
2026-03-13 01:21:14.604073 | orchestrator |
2026-03-13 01:21:14.604077 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-13 01:21:14.604081 | orchestrator | Friday 13 March 2026 01:21:05 +0000 (0:00:00.065) 0:00:09.243 **********
2026-03-13 01:21:14.604085 | orchestrator |
2026-03-13 01:21:14.604090 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-13 01:21:14.604103 | orchestrator | Friday 13 March 2026 01:21:05 +0000 (0:00:00.070) 0:00:09.309 **********
2026-03-13 01:21:14.604115 | orchestrator |
2026-03-13 01:21:14.604125 | orchestrator | TASK [Print report file information] *******************************************
2026-03-13 01:21:14.604129 | orchestrator | Friday 13 March 2026 01:21:05 +0000 (0:00:00.257) 0:00:09.380 **********
2026-03-13 01:21:14.604137 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:21:14.604145 | orchestrator |
2026-03-13 01:21:14.604153 | orchestrator | TASK [Fail early due to containers not running] ********************************
2026-03-13 01:21:14.604160 | orchestrator | Friday 13 March 2026 01:21:06 +0000 (0:00:00.257) 0:00:09.637 **********
2026-03-13 01:21:14.604168 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:21:14.604176 | orchestrator |
2026-03-13 01:21:14.604183 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-13 01:21:14.604197 | orchestrator | Friday 13 March 2026 01:21:06 +0000 (0:00:00.239) 0:00:09.877 **********
2026-03-13 01:21:14.604206 | orchestrator | ok: [testbed-node-3]
2026-03-13 01:21:14.604214 | orchestrator | ok: [testbed-node-4]
2026-03-13 01:21:14.604220 | orchestrator | ok: [testbed-node-5]
2026-03-13 01:21:14.604225 | orchestrator |
2026-03-13 01:21:14.604249 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2026-03-13 01:21:14.604257 | orchestrator | Friday 13 March 2026 01:21:06 +0000 (0:00:00.312) 0:00:10.189 **********
2026-03-13 01:21:14.604265 | orchestrator | ok: [testbed-node-3]
2026-03-13 01:21:14.604272 | orchestrator |
2026-03-13 01:21:14.604280 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2026-03-13 01:21:14.604287 | orchestrator | Friday 13 March 2026 01:21:07 +0000 (0:00:00.605) 0:00:10.795 **********
2026-03-13 01:21:14.604295 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-13 01:21:14.604303 | orchestrator |
2026-03-13 01:21:14.604311 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2026-03-13 01:21:14.604316 | orchestrator | Friday 13 March 2026 01:21:08 +0000 (0:00:01.566) 0:00:12.361 **********
2026-03-13 01:21:14.604321 | orchestrator | ok: [testbed-node-3]
2026-03-13 01:21:14.604326 | orchestrator |
2026-03-13 01:21:14.604331 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2026-03-13 01:21:14.604336 | orchestrator | Friday 13 March 2026 01:21:08 +0000 (0:00:00.322) 0:00:12.499 **********
2026-03-13 01:21:14.604343 | orchestrator | ok: [testbed-node-3]
2026-03-13 01:21:14.604350 | orchestrator |
2026-03-13 01:21:14.604358 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2026-03-13 01:21:14.604365 | orchestrator | Friday 13 March 2026 01:21:09 +0000 (0:00:00.322) 0:00:12.821 **********
2026-03-13 01:21:14.604373 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:21:14.604380 | orchestrator |
2026-03-13 01:21:14.604388 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2026-03-13 01:21:14.604395 | orchestrator | Friday 13 March 2026 01:21:09 +0000 (0:00:00.118) 0:00:12.940 **********
2026-03-13 01:21:14.604403 | orchestrator | ok: [testbed-node-3]
2026-03-13 01:21:14.604411 | orchestrator |
2026-03-13 01:21:14.604417 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-13 01:21:14.604422 | orchestrator | Friday 13 March 2026 01:21:09 +0000 (0:00:00.130) 0:00:13.070 **********
2026-03-13 01:21:14.604426 | orchestrator | ok: [testbed-node-3]
2026-03-13 01:21:14.604432 | orchestrator | ok: [testbed-node-4]
2026-03-13 01:21:14.604440 | orchestrator | ok: [testbed-node-5]
2026-03-13 01:21:14.604447 | orchestrator |
2026-03-13 01:21:14.604455 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2026-03-13 01:21:14.604462 | orchestrator | Friday 13 March 2026 01:21:09 +0000 (0:00:00.264) 0:00:13.334 **********
2026-03-13 01:21:14.604470 | orchestrator | changed: [testbed-node-3]
2026-03-13 01:21:14.604478 | orchestrator | changed: [testbed-node-4]
2026-03-13 01:21:14.604485 | orchestrator | changed: [testbed-node-5]
2026-03-13 01:21:14.604493 | orchestrator |
2026-03-13 01:21:14.604500 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2026-03-13 01:21:14.604508 | orchestrator | Friday 13 March 2026 01:21:12 +0000 (0:00:02.645) 0:00:15.980 **********
2026-03-13 01:21:14.604516 | orchestrator | ok: [testbed-node-3]
2026-03-13 01:21:14.604523 | orchestrator | ok: [testbed-node-4]
2026-03-13 01:21:14.604529 | orchestrator | ok: [testbed-node-5]
2026-03-13 01:21:14.604534 | orchestrator |
2026-03-13 01:21:14.604539 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2026-03-13 01:21:14.604544 | orchestrator | Friday 13 March 2026 01:21:12 +0000 (0:00:00.305) 0:00:16.285 **********
2026-03-13 01:21:14.604548 | orchestrator | ok: [testbed-node-3]
2026-03-13 01:21:14.604554 | orchestrator | ok: [testbed-node-4]
2026-03-13 01:21:14.604563 | orchestrator | ok: [testbed-node-5]
2026-03-13 01:21:14.604574 | orchestrator |
2026-03-13 01:21:14.604581 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2026-03-13 01:21:14.604595 | orchestrator | Friday 13 March 2026 01:21:13 +0000 (0:00:00.518) 0:00:16.804 **********
2026-03-13 01:21:14.604603 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:21:14.604612 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:21:14.604617 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:21:14.604622 | orchestrator |
2026-03-13 01:21:14.604627 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2026-03-13 01:21:14.604632 | orchestrator | Friday 13 March 2026 01:21:13 +0000 (0:00:00.301) 0:00:17.106 **********
2026-03-13 01:21:14.604637 | orchestrator | ok: [testbed-node-3]
2026-03-13 01:21:14.604641 | orchestrator | ok: [testbed-node-4]
2026-03-13 01:21:14.604646 | orchestrator | ok: [testbed-node-5]
2026-03-13 01:21:14.604651 | orchestrator |
2026-03-13 01:21:14.604656 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2026-03-13 01:21:14.604661 | orchestrator | Friday 13 March 2026 01:21:13 +0000 (0:00:00.475) 0:00:17.581 **********
2026-03-13 01:21:14.604666 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:21:14.604671 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:21:14.604676 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:21:14.604681 | orchestrator |
2026-03-13 01:21:14.604686 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2026-03-13 01:21:14.604704 | orchestrator | Friday 13 March 2026 01:21:14 +0000 (0:00:00.309) 0:00:17.891 **********
2026-03-13 01:21:14.604709 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:21:14.604714 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:21:14.604719 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:21:14.604724 | orchestrator |
2026-03-13 01:21:14.604734 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-13 01:21:22.062587 | orchestrator | Friday 13 March 2026 01:21:14 +0000 (0:00:00.296) 0:00:18.187 **********
2026-03-13 01:21:22.062781 | orchestrator | ok: [testbed-node-3]
2026-03-13 01:21:22.063516 | orchestrator | ok: [testbed-node-4]
2026-03-13 01:21:22.063561 | orchestrator | ok: [testbed-node-5]
2026-03-13 01:21:22.063570 | orchestrator |
2026-03-13 01:21:22.063591 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2026-03-13 01:21:22.063606 | orchestrator | Friday 13 March 2026 01:21:15 +0000 (0:00:00.484) 0:00:18.671 **********
2026-03-13 01:21:22.063613 | orchestrator | ok: [testbed-node-3]
2026-03-13 01:21:22.063619 | orchestrator | ok: [testbed-node-4]
2026-03-13 01:21:22.063626 | orchestrator | ok: [testbed-node-5]
2026-03-13 01:21:22.063632 | orchestrator |
2026-03-13 01:21:22.063638 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2026-03-13 01:21:22.063644 | orchestrator | Friday 13 March 2026 01:21:15 +0000 (0:00:00.804) 0:00:19.476 **********
2026-03-13 01:21:22.063650 | orchestrator | ok: [testbed-node-3]
2026-03-13 01:21:22.063656 | orchestrator | ok: [testbed-node-4]
2026-03-13 01:21:22.063662 | orchestrator | ok: [testbed-node-5]
2026-03-13 01:21:22.063668 | orchestrator |
2026-03-13 01:21:22.063674 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2026-03-13 01:21:22.063680 | orchestrator | Friday 13 March 2026 01:21:16 +0000 (0:00:00.305) 0:00:19.781 **********
2026-03-13 01:21:22.063687 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:21:22.063758 | orchestrator | skipping: [testbed-node-4]
2026-03-13 01:21:22.063768 | orchestrator | skipping: [testbed-node-5]
2026-03-13 01:21:22.063774 | orchestrator |
2026-03-13 01:21:22.063780 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2026-03-13 01:21:22.063787 | orchestrator | Friday 13 March 2026 01:21:16 +0000 (0:00:00.276) 0:00:20.058 **********
2026-03-13 01:21:22.063793 | orchestrator | ok: [testbed-node-3]
2026-03-13 01:21:22.063799 | orchestrator | ok: [testbed-node-4]
2026-03-13 01:21:22.063805 | orchestrator | ok: [testbed-node-5]
2026-03-13 01:21:22.063811 | orchestrator |
2026-03-13 01:21:22.063816 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-03-13 01:21:22.063823 | orchestrator | Friday 13 March 2026 01:21:16 +0000 (0:00:00.482) 0:00:20.540 **********
2026-03-13 01:21:22.063857 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-13 01:21:22.063863 | orchestrator |
2026-03-13 01:21:22.063869 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-03-13 01:21:22.063875 | orchestrator | Friday 13 March 2026 01:21:17 +0000 (0:00:00.255) 0:00:20.795 **********
2026-03-13 01:21:22.063881 | orchestrator | skipping: [testbed-node-3]
2026-03-13 01:21:22.063887 | orchestrator |
2026-03-13 01:21:22.063906 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-13 01:21:22.063918 | orchestrator | Friday 13 March 2026 01:21:17 +0000 (0:00:00.236) 0:00:21.031 **********
2026-03-13 01:21:22.063925 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-13 01:21:22.063930 | orchestrator |
2026-03-13 01:21:22.063936 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-13 01:21:22.063942 | orchestrator | Friday 13 March 2026 01:21:18 +0000 (0:00:01.539) 0:00:22.571 **********
2026-03-13 01:21:22.063948 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-13 01:21:22.063954 | orchestrator |
2026-03-13 01:21:22.063960 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-13 01:21:22.063967 | orchestrator | Friday 13 March 2026 01:21:19 +0000 (0:00:00.252) 0:00:22.824 **********
2026-03-13 01:21:22.063973 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-13 01:21:22.063979 | orchestrator |
2026-03-13 01:21:22.063985 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-13 01:21:22.063991 | orchestrator | Friday 13 March 2026 01:21:19 +0000 (0:00:00.263) 0:00:23.088 **********
2026-03-13 01:21:22.063996 | orchestrator |
2026-03-13 01:21:22.064001 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-13 01:21:22.064007 | orchestrator | Friday 13 March 2026 01:21:19 +0000 (0:00:00.067) 0:00:23.156 **********
2026-03-13 01:21:22.064013 | orchestrator |
2026-03-13 01:21:22.064019 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-13 01:21:22.064025 | orchestrator | Friday 13 March 2026 01:21:19 +0000 (0:00:00.094) 0:00:23.251 **********
2026-03-13 01:21:22.064031 | orchestrator |
2026-03-13 01:21:22.064037 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-03-13 01:21:22.064044 | orchestrator | Friday 13 March 2026 01:21:19 +0000 (0:00:00.074) 0:00:23.325 **********
2026-03-13 01:21:22.064049 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-13 01:21:22.064055 | orchestrator |
2026-03-13 01:21:22.064060 | orchestrator | TASK [Print report file information] *******************************************
2026-03-13 01:21:22.064081 | orchestrator | Friday 13 March 2026 01:21:21 +0000 (0:00:01.527) 0:00:24.852 **********
2026-03-13 01:21:22.064087 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2026-03-13 01:21:22.064093 | orchestrator |  "msg": [
2026-03-13 01:21:22.064100 | orchestrator |  "Validator run completed.",
2026-03-13 01:21:22.064106 | orchestrator |  "You can find the report file here:",
2026-03-13 01:21:22.064113 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-03-13T01:20:57+00:00-report.json",
2026-03-13 01:21:22.064120 | orchestrator |  "on the following host:",
2026-03-13 01:21:22.064126 | orchestrator |  "testbed-manager"
2026-03-13 01:21:22.064132 | orchestrator |  ]
2026-03-13 01:21:22.064138 | orchestrator | }
2026-03-13 01:21:22.064144 | orchestrator |
2026-03-13 01:21:22.064150 | orchestrator | PLAY RECAP *********************************************************************
2026-03-13 01:21:22.064158 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-13 01:21:22.064166 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-13 01:21:22.064194 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-13 01:21:22.064209 | orchestrator |
2026-03-13 01:21:22.064215 | orchestrator |
2026-03-13 01:21:22.064221 | orchestrator | TASKS RECAP ********************************************************************
2026-03-13 01:21:22.064227 | orchestrator | Friday 13 March 2026 01:21:21 +0000 (0:00:00.550) 0:00:25.403 **********
2026-03-13 01:21:22.064233 | orchestrator | ===============================================================================
2026-03-13 01:21:22.064239 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.65s
2026-03-13 01:21:22.064244 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.57s
2026-03-13 01:21:22.064250 | orchestrator | Aggregate test results step one ----------------------------------------- 1.54s
2026-03-13 01:21:22.064255 | orchestrator | Write report file ------------------------------------------------------- 1.53s
2026-03-13 01:21:22.064261 | orchestrator | Get timestamp for report file ------------------------------------------- 0.81s
2026-03-13 01:21:22.064267 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.81s
2026-03-13 01:21:22.064273 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.77s
2026-03-13 01:21:22.064279 | orchestrator | Create report output directory ------------------------------------------ 0.71s
2026-03-13 01:21:22.064284 | orchestrator | Aggregate test results step one ----------------------------------------- 0.65s
2026-03-13 01:21:22.064289 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.61s
2026-03-13 01:21:22.064296 | orchestrator | Print report file information ------------------------------------------- 0.55s
2026-03-13 01:21:22.064302 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.53s
2026-03-13 01:21:22.064308 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.52s
2026-03-13 01:21:22.064314 | orchestrator | Prepare test data ------------------------------------------------------- 0.48s
2026-03-13 01:21:22.064320 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.48s
2026-03-13 01:21:22.064326 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.48s
2026-03-13 01:21:22.064332 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.48s
2026-03-13 01:21:22.064338 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.46s
2026-03-13 01:21:22.064344 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.45s
2026-03-13 01:21:22.064350 | orchestrator | Calculate OSD devices for each host ------------------------------------- 0.33s
2026-03-13 01:21:22.365850 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2026-03-13 01:21:22.371564 | orchestrator | + set -e
2026-03-13 01:21:22.371630 | orchestrator | + source /opt/manager-vars.sh
2026-03-13 01:21:22.371636 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-13 01:21:22.371641 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-13 01:21:22.371646 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-13 01:21:22.372507 | orchestrator | ++ CEPH_VERSION=reef
2026-03-13 01:21:22.372541 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-13 01:21:22.372547 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-13 01:21:22.372551 | orchestrator | ++ export MANAGER_VERSION=latest
2026-03-13 01:21:22.372556 | orchestrator | ++ MANAGER_VERSION=latest
2026-03-13 01:21:22.372560 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-13 01:21:22.372564 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-13 01:21:22.372567 | orchestrator | ++ export ARA=false
2026-03-13 01:21:22.372572 | orchestrator | ++ ARA=false
2026-03-13 01:21:22.372576 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-13 01:21:22.372580 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-13 01:21:22.372584 | orchestrator | ++ export TEMPEST=true
2026-03-13 01:21:22.372587 | orchestrator | ++ TEMPEST=true
2026-03-13 01:21:22.372591 | orchestrator | ++ export IS_ZUUL=true
2026-03-13 01:21:22.372595 | orchestrator | ++ IS_ZUUL=true
2026-03-13 01:21:22.372598 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.64
2026-03-13 01:21:22.372602 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.64
2026-03-13 01:21:22.372606 | orchestrator | ++ export EXTERNAL_API=false
2026-03-13 01:21:22.372610 | orchestrator | ++ EXTERNAL_API=false
2026-03-13 01:21:22.372632 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-13 01:21:22.372636 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-13 01:21:22.372640 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-13 01:21:22.372643 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-13 01:21:22.372647 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-13 01:21:22.372650 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-13 01:21:22.372654 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-03-13 01:21:22.372658 | orchestrator | + source /etc/os-release
2026-03-13 01:21:22.372662 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS'
2026-03-13 01:21:22.372666 | orchestrator | ++ NAME=Ubuntu
2026-03-13 01:21:22.372669 | orchestrator | ++ VERSION_ID=24.04
2026-03-13 01:21:22.372673 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)'
2026-03-13 01:21:22.372678 | orchestrator | ++ VERSION_CODENAME=noble
2026-03-13 01:21:22.372682 | orchestrator | ++ ID=ubuntu
2026-03-13 01:21:22.372685 | orchestrator | ++ ID_LIKE=debian
2026-03-13 01:21:22.372689 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/
2026-03-13 01:21:22.372708 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/
2026-03-13 01:21:22.372724 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
2026-03-13 01:21:22.372728 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
2026-03-13 01:21:22.372732 | orchestrator | ++ UBUNTU_CODENAME=noble
2026-03-13 01:21:22.372736 | orchestrator | ++ LOGO=ubuntu-logo
2026-03-13 01:21:22.372740 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]]
2026-03-13 01:21:22.372744 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client'
2026-03-13 01:21:22.372750 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2026-03-13 01:21:22.399594 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2026-03-13 01:21:43.529690 | orchestrator |
2026-03-13 01:21:43.529870 | orchestrator | # Status of Elasticsearch
2026-03-13 01:21:43.529879 | orchestrator |
2026-03-13 01:21:43.529884 | orchestrator | + pushd /opt/configuration/contrib
2026-03-13 01:21:43.529889 | orchestrator | + echo
2026-03-13 01:21:43.529893 | orchestrator | + echo '# Status of Elasticsearch'
2026-03-13 01:21:43.529897 | orchestrator | + echo
2026-03-13 01:21:43.529902 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s
2026-03-13 01:21:43.694133 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0
2026-03-13 01:21:43.694216 | orchestrator |
2026-03-13 01:21:43.694227 | orchestrator | # Status of MariaDB
2026-03-13 01:21:43.694235 | orchestrator |
2026-03-13 01:21:43.694241 | orchestrator | + echo
2026-03-13 01:21:43.694248 | orchestrator | + echo '# Status of MariaDB'
2026-03-13 01:21:43.694255 | orchestrator | + echo
2026-03-13 01:21:43.694539 | orchestrator | ++ semver latest 10.0.0-0
2026-03-13 01:21:43.750267 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-13 01:21:43.750359 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-13 01:21:43.750368 | orchestrator | + osism status database
2026-03-13 01:21:45.809693 | orchestrator | 2026-03-13 01:21:45 | ERROR  | Unable to get ansible vault password
2026-03-13 01:21:45.809825 | orchestrator | 2026-03-13 01:21:45 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-13 01:21:45.809838 | orchestrator | 2026-03-13 01:21:45 | ERROR  | Dropping encrypted entries
2026-03-13 01:21:45.841313 | orchestrator | 2026-03-13 01:21:45 | INFO  | Connecting to MariaDB at 192.168.16.9 as root_shard_0...
2026-03-13 01:21:45.854951 | orchestrator | 2026-03-13 01:21:45 | INFO  | Cluster Status: Primary
2026-03-13 01:21:45.855028 | orchestrator | 2026-03-13 01:21:45 | INFO  | Connected: ON
2026-03-13 01:21:45.855107 | orchestrator | 2026-03-13 01:21:45 | INFO  | Ready: ON
2026-03-13 01:21:45.855116 | orchestrator | 2026-03-13 01:21:45 | INFO  | Cluster Size: 3
2026-03-13 01:21:45.855124 | orchestrator | 2026-03-13 01:21:45 | INFO  | Local State: Synced
2026-03-13 01:21:45.855158 | orchestrator | 2026-03-13 01:21:45 | INFO  | Cluster State UUID: 56012ae5-1e77-11f1-b2ed-c32944a3ff6b
2026-03-13 01:21:45.855166 | orchestrator | 2026-03-13 01:21:45 | INFO  | Cluster Members: 192.168.16.11:3306,192.168.16.12:3306,192.168.16.10:3306
2026-03-13 01:21:45.855174 | orchestrator | 2026-03-13 01:21:45 | INFO  | Galera Version: 26.4.25(r7387a566)
2026-03-13 01:21:45.855181 | orchestrator | 2026-03-13 01:21:45 | INFO  | Local Node UUID: 88d31279-1e77-11f1-aa1a-bb93584fade3
2026-03-13 01:21:45.855188 | orchestrator | 2026-03-13 01:21:45 | INFO  | Flow Control Paused: 0.00%
2026-03-13 01:21:45.855203 | orchestrator | 2026-03-13 01:21:45 | INFO  | Recv Queue Avg: 0.0104167
2026-03-13 01:21:45.855210 | orchestrator | 2026-03-13 01:21:45 | INFO  | Send Queue Avg: 0
2026-03-13 01:21:45.855217 | orchestrator | 2026-03-13 01:21:45 | INFO  | Transactions: 5029 local commits, 7214 replicated, 96 received
2026-03-13 01:21:45.855223 | orchestrator | 2026-03-13 01:21:45 | INFO  | Conflicts: 0 cert failures, 0 bf aborts
2026-03-13 01:21:45.855230 | orchestrator | 2026-03-13 01:21:45 | INFO  | MariaDB Uptime: 24 minutes, 48 seconds
2026-03-13 01:21:45.855237 | orchestrator | 2026-03-13 01:21:45 | INFO  | Threads: 130 connected, 1 running
2026-03-13 01:21:45.855244 | orchestrator | 2026-03-13 01:21:45 | INFO  | Queries: 273273 total, 0 slow
2026-03-13 01:21:45.855349 | orchestrator | 2026-03-13 01:21:45 | INFO  | Aborted Connects: 141
2026-03-13 01:21:45.855359 | orchestrator | 2026-03-13 01:21:45 | INFO  | MariaDB Galera Cluster validation PASSED
2026-03-13 01:21:46.153184 | orchestrator |
2026-03-13 01:21:46.153273 | orchestrator | # Status of Prometheus
2026-03-13 01:21:46.153283 | orchestrator |
2026-03-13 01:21:46.153291 | orchestrator | + echo
2026-03-13 01:21:46.153297 | orchestrator | + echo '# Status of Prometheus'
2026-03-13 01:21:46.153303 | orchestrator | + echo
2026-03-13 01:21:46.153310 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy
2026-03-13 01:21:46.207644 | orchestrator | Unauthorized
2026-03-13 01:21:46.210587 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready
2026-03-13 01:21:46.262238 | orchestrator | Unauthorized
2026-03-13 01:21:46.265561 | orchestrator |
2026-03-13 01:21:46.265646 | orchestrator | # Status of RabbitMQ
2026-03-13 01:21:46.265657 | orchestrator |
2026-03-13 01:21:46.265664 | orchestrator | + echo
2026-03-13 01:21:46.265670 | orchestrator | + echo '# Status of RabbitMQ'
2026-03-13 01:21:46.265676 | orchestrator | + echo
2026-03-13 01:21:46.266004 | orchestrator | ++ semver latest 10.0.0-0
2026-03-13 01:21:46.323473 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-13 01:21:46.323545 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-13 01:21:46.323552 | orchestrator | + osism status messaging
2026-03-13 01:22:07.007265 | orchestrator | 2026-03-13 01:22:07 | ERROR  | Unable to get ansible vault password
2026-03-13 01:22:07.007352 | orchestrator | 2026-03-13 01:22:07 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-13 01:22:07.007361 | orchestrator | 2026-03-13 01:22:07 | ERROR  | Dropping encrypted entries 2026-03-13 01:22:07.048309 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-0] Connecting to RabbitMQ Management API at 192.168.16.10:15672 as openstack... 2026-03-13 01:22:07.136107 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-0] RabbitMQ Version: 3.13.7 2026-03-13 01:22:07.136252 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-0] Erlang Version: 26.2.5.15 2026-03-13 01:22:07.136290 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-0] Cluster Name: rabbit@testbed-node-0 2026-03-13 01:22:07.136299 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-0] Cluster Size: 3 2026-03-13 01:22:07.136312 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-0] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-03-13 01:22:07.137072 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-0] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-03-13 01:22:07.137323 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-0] Partitions: None (healthy) 2026-03-13 01:22:07.138186 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-0] Connections: 203, Channels: 202, Queues: 173 2026-03-13 01:22:07.138229 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-0] Messages: 218 total, 218 ready, 0 unacked 2026-03-13 01:22:07.138811 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-0] Message Rates: 7.8/s publish, 7.8/s deliver 2026-03-13 01:22:07.139369 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-0] Disk Free: 58.2 GB (limit: 0.0 GB) 2026-03-13 01:22:07.139590 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-0] Memory Used: 0.18 GB (limit: 12.54 GB) 2026-03-13 01:22:07.140745 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-0] File Descriptors: 125/1024 2026-03-13 01:22:07.140787 
| orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-0] Sockets: 79/832 2026-03-13 01:22:07.141202 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-1] Connecting to RabbitMQ Management API at 192.168.16.11:15672 as openstack... 2026-03-13 01:22:07.204962 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-1] RabbitMQ Version: 3.13.7 2026-03-13 01:22:07.205054 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-1] Erlang Version: 26.2.5.15 2026-03-13 01:22:07.205064 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-1] Cluster Name: rabbit@testbed-node-1 2026-03-13 01:22:07.205071 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-1] Cluster Size: 3 2026-03-13 01:22:07.205079 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-1] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-03-13 01:22:07.205087 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-1] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-03-13 01:22:07.205095 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-1] Partitions: None (healthy) 2026-03-13 01:22:07.205101 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-1] Connections: 203, Channels: 202, Queues: 173 2026-03-13 01:22:07.205119 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-1] Messages: 218 total, 218 ready, 0 unacked 2026-03-13 01:22:07.205127 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-1] Message Rates: 7.8/s publish, 7.8/s deliver 2026-03-13 01:22:07.205263 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-1] Disk Free: 58.4 GB (limit: 0.0 GB) 2026-03-13 01:22:07.206082 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-1] Memory Used: 0.18 GB (limit: 12.54 GB) 2026-03-13 01:22:07.206126 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-1] File Descriptors: 109/1024 2026-03-13 01:22:07.206133 | orchestrator | 
2026-03-13 01:22:07 | INFO  | [testbed-node-1] Sockets: 61/832 2026-03-13 01:22:07.206201 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-2] Connecting to RabbitMQ Management API at 192.168.16.12:15672 as openstack... 2026-03-13 01:22:07.275166 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-2] RabbitMQ Version: 3.13.7 2026-03-13 01:22:07.275309 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-2] Erlang Version: 26.2.5.15 2026-03-13 01:22:07.275333 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-2] Cluster Name: rabbit@testbed-node-2 2026-03-13 01:22:07.275359 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-2] Cluster Size: 3 2026-03-13 01:22:07.275805 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-2] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-03-13 01:22:07.275886 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-2] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-03-13 01:22:07.275899 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-2] Partitions: None (healthy) 2026-03-13 01:22:07.276111 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-2] Connections: 203, Channels: 202, Queues: 173 2026-03-13 01:22:07.276231 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-2] Messages: 218 total, 218 ready, 0 unacked 2026-03-13 01:22:07.276888 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-2] Message Rates: 7.8/s publish, 7.8/s deliver 2026-03-13 01:22:07.276934 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-2] Disk Free: 58.4 GB (limit: 0.0 GB) 2026-03-13 01:22:07.276948 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-2] Memory Used: 0.18 GB (limit: 12.54 GB) 2026-03-13 01:22:07.277213 | orchestrator | 2026-03-13 01:22:07 | INFO  | [testbed-node-2] File Descriptors: 109/1024 2026-03-13 01:22:07.277388 | orchestrator | 2026-03-13 01:22:07 | 
INFO  | [testbed-node-2] Sockets: 63/832 2026-03-13 01:22:07.277496 | orchestrator | 2026-03-13 01:22:07 | INFO  | RabbitMQ Cluster validation PASSED 2026-03-13 01:22:07.608788 | orchestrator | 2026-03-13 01:22:07.608872 | orchestrator | # Status of Redis 2026-03-13 01:22:07.608880 | orchestrator | 2026-03-13 01:22:07.608885 | orchestrator | + echo 2026-03-13 01:22:07.608889 | orchestrator | + echo '# Status of Redis' 2026-03-13 01:22:07.608894 | orchestrator | + echo 2026-03-13 01:22:07.608900 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-03-13 01:22:07.615326 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.002059s;;;0.000000;10.000000 2026-03-13 01:22:07.616230 | orchestrator | 2026-03-13 01:22:07.616284 | orchestrator | # Create backup of MariaDB database 2026-03-13 01:22:07.616292 | orchestrator | 2026-03-13 01:22:07.616299 | orchestrator | + popd 2026-03-13 01:22:07.616306 | orchestrator | + echo 2026-03-13 01:22:07.616313 | orchestrator | + echo '# Create backup of MariaDB database' 2026-03-13 01:22:07.616319 | orchestrator | + echo 2026-03-13 01:22:07.616326 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-03-13 01:22:09.599509 | orchestrator | 2026-03-13 01:22:09 | INFO  | Prepare task for execution of mariadb_backup. 2026-03-13 01:22:09.653484 | orchestrator | 2026-03-13 01:22:09 | INFO  | Task e4135622-1eb6-4341-836a-821fbb302f2a (mariadb_backup) was prepared for execution. 2026-03-13 01:22:09.653554 | orchestrator | 2026-03-13 01:22:09 | INFO  | It takes a moment until task e4135622-1eb6-4341-836a-821fbb302f2a (mariadb_backup) has been started and output is visible here. 
2026-03-13 01:23:24.362106 | orchestrator | 2026-03-13 01:23:24.362171 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-13 01:23:24.362182 | orchestrator | 2026-03-13 01:23:24.362188 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-13 01:23:24.362196 | orchestrator | Friday 13 March 2026 01:22:13 +0000 (0:00:00.171) 0:00:00.171 ********** 2026-03-13 01:23:24.362202 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:23:24.362210 | orchestrator | ok: [testbed-node-1] 2026-03-13 01:23:24.362217 | orchestrator | ok: [testbed-node-2] 2026-03-13 01:23:24.362224 | orchestrator | 2026-03-13 01:23:24.362232 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-13 01:23:24.362240 | orchestrator | Friday 13 March 2026 01:22:13 +0000 (0:00:00.320) 0:00:00.492 ********** 2026-03-13 01:23:24.362248 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-13 01:23:24.362256 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-03-13 01:23:24.362278 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-13 01:23:24.362282 | orchestrator | 2026-03-13 01:23:24.362287 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-13 01:23:24.362291 | orchestrator | 2026-03-13 01:23:24.362295 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-13 01:23:24.362300 | orchestrator | Friday 13 March 2026 01:22:14 +0000 (0:00:00.558) 0:00:01.050 ********** 2026-03-13 01:23:24.362304 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-13 01:23:24.362309 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-13 01:23:24.362313 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-13 01:23:24.362317 | orchestrator | 
2026-03-13 01:23:24.362322 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-13 01:23:24.362326 | orchestrator | Friday 13 March 2026 01:22:14 +0000 (0:00:00.382) 0:00:01.432 ********** 2026-03-13 01:23:24.362331 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-13 01:23:24.362336 | orchestrator | 2026-03-13 01:23:24.362341 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-03-13 01:23:24.362345 | orchestrator | Friday 13 March 2026 01:22:15 +0000 (0:00:00.509) 0:00:01.942 ********** 2026-03-13 01:23:24.362349 | orchestrator | ok: [testbed-node-1] 2026-03-13 01:23:24.362353 | orchestrator | ok: [testbed-node-0] 2026-03-13 01:23:24.362358 | orchestrator | ok: [testbed-node-2] 2026-03-13 01:23:24.362362 | orchestrator | 2026-03-13 01:23:24.362366 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-03-13 01:23:24.362370 | orchestrator | Friday 13 March 2026 01:22:18 +0000 (0:00:03.068) 0:00:05.010 ********** 2026-03-13 01:23:24.362382 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:23:24.362387 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:23:24.362391 | orchestrator | changed: [testbed-node-0] 2026-03-13 01:23:24.362396 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-03-13 01:23:24.362400 | orchestrator | 2026-03-13 01:23:24.362404 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-13 01:23:24.362409 | orchestrator | skipping: no hosts matched 2026-03-13 01:23:24.362413 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-03-13 01:23:24.362417 | orchestrator | 2026-03-13 01:23:24.362422 | orchestrator | PLAY [Start mariadb services] 
************************************************** 2026-03-13 01:23:24.362426 | orchestrator | skipping: no hosts matched 2026-03-13 01:23:24.362430 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-13 01:23:24.362434 | orchestrator | mariadb_bootstrap_restart 2026-03-13 01:23:24.362439 | orchestrator | 2026-03-13 01:23:24.362443 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-13 01:23:24.362447 | orchestrator | skipping: no hosts matched 2026-03-13 01:23:24.362452 | orchestrator | 2026-03-13 01:23:24.362456 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-13 01:23:24.362460 | orchestrator | 2026-03-13 01:23:24.362464 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-13 01:23:24.362469 | orchestrator | Friday 13 March 2026 01:23:23 +0000 (0:01:05.377) 0:01:10.388 ********** 2026-03-13 01:23:24.362473 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:23:24.362477 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:23:24.362481 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:23:24.362486 | orchestrator | 2026-03-13 01:23:24.362490 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-13 01:23:24.362494 | orchestrator | Friday 13 March 2026 01:23:23 +0000 (0:00:00.257) 0:01:10.646 ********** 2026-03-13 01:23:24.362499 | orchestrator | skipping: [testbed-node-0] 2026-03-13 01:23:24.362503 | orchestrator | skipping: [testbed-node-1] 2026-03-13 01:23:24.362507 | orchestrator | skipping: [testbed-node-2] 2026-03-13 01:23:24.362515 | orchestrator | 2026-03-13 01:23:24.362519 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 01:23:24.362524 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-13 01:23:24.362529 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-13 01:23:24.362534 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-13 01:23:24.362538 | orchestrator | 2026-03-13 01:23:24.362542 | orchestrator | 2026-03-13 01:23:24.362547 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-13 01:23:24.362551 | orchestrator | Friday 13 March 2026 01:23:24 +0000 (0:00:00.297) 0:01:10.943 ********** 2026-03-13 01:23:24.362555 | orchestrator | =============================================================================== 2026-03-13 01:23:24.362560 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 65.38s 2026-03-13 01:23:24.362572 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.07s 2026-03-13 01:23:24.362577 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.56s 2026-03-13 01:23:24.362581 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.51s 2026-03-13 01:23:24.362586 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.38s 2026-03-13 01:23:24.362590 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2026-03-13 01:23:24.362594 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.30s 2026-03-13 01:23:24.362599 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.26s 2026-03-13 01:23:24.566629 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-03-13 01:23:24.574404 | orchestrator | + set -e 2026-03-13 01:23:24.574489 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-13 01:23:24.574497 | 
orchestrator | ++ export INTERACTIVE=false 2026-03-13 01:23:24.574502 | orchestrator | ++ INTERACTIVE=false 2026-03-13 01:23:24.574507 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-13 01:23:24.574512 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-13 01:23:24.574522 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-13 01:23:24.575713 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-13 01:23:24.580844 | orchestrator | 2026-03-13 01:23:24.580885 | orchestrator | # OpenStack endpoints 2026-03-13 01:23:24.580891 | orchestrator | 2026-03-13 01:23:24.580895 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-13 01:23:24.580900 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-13 01:23:24.580904 | orchestrator | + export OS_CLOUD=admin 2026-03-13 01:23:24.580908 | orchestrator | + OS_CLOUD=admin 2026-03-13 01:23:24.580913 | orchestrator | + echo 2026-03-13 01:23:24.580917 | orchestrator | + echo '# OpenStack endpoints' 2026-03-13 01:23:24.580921 | orchestrator | + echo 2026-03-13 01:23:24.580925 | orchestrator | + openstack endpoint list 2026-03-13 01:23:27.577417 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-03-13 01:23:27.577468 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-03-13 01:23:27.577474 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-03-13 01:23:27.577478 | orchestrator | | 00a7393633db493b8ed69380ef331029 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-03-13 01:23:27.577496 | orchestrator | | 146a712ff569465f9e0cb2a225093f0c | 
RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-03-13 01:23:27.577512 | orchestrator | | 1c2331192cde4aa4823fbe8dc97a0ef3 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-03-13 01:23:27.577517 | orchestrator | | 2b54bd3974be4e16be1f461e989daa73 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-03-13 01:23:27.577522 | orchestrator | | 3bdc5e2c74464ce39fd1204a03b8e7f3 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-03-13 01:23:27.577527 | orchestrator | | 3c3e52b9dcee42639d3d37f38e47dd01 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-03-13 01:23:27.577531 | orchestrator | | 4f969d0abb3749d8a8727a89ea9ac0d0 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-03-13 01:23:27.577536 | orchestrator | | 4ff6652ae0c047068e8741c90b6caf50 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-03-13 01:23:27.577541 | orchestrator | | 5c301520e8ad4ed3bc71d4a57da7b2ca | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-03-13 01:23:27.577545 | orchestrator | | 68e123dbc0314cad90d5bffa736fdf5c | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-03-13 01:23:27.577550 | orchestrator | | 85e5c3387cb041b1a58a8f6c652081f1 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-03-13 01:23:27.577555 | orchestrator | | 9ab7a6eb304d47dd9fdcf060c78fa180 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-03-13 01:23:27.577559 | orchestrator | | aacbc6fbc6804fb4be6985ebb9eb0b5e | RegionOne | glance | image | True | internal | 
https://api-int.testbed.osism.xyz:9292 | 2026-03-13 01:23:27.577564 | orchestrator | | c19e20e455a34676b14bef8ded0ce7e1 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-03-13 01:23:27.577568 | orchestrator | | c4aae0525af44835b7262f2e090df58b | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-03-13 01:23:27.577573 | orchestrator | | c589b96434474200ac7f1b398d32d1a2 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-03-13 01:23:27.577578 | orchestrator | | d6ec9af04d804dea933c401b2263f215 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-03-13 01:23:27.577582 | orchestrator | | d8fca67d0cd94aa58ce38975a48deeb6 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-03-13 01:23:27.577587 | orchestrator | | ded0eb352c984edf8eab864da8387022 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-03-13 01:23:27.577591 | orchestrator | | f050ee5e3b444deb9d01cd513b8bf89d | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-03-13 01:23:27.577605 | orchestrator | | f39a210140e946949a1a23abfe778891 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-03-13 01:23:27.577610 | orchestrator | | f6f39c33fd3a4ed2bcb40d0a82cd02eb | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-03-13 01:23:27.577618 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-03-13 01:23:27.719111 | orchestrator | 2026-03-13 01:23:27.719168 | orchestrator | # Cinder 2026-03-13 01:23:27.719176 | orchestrator | 2026-03-13 
01:23:27.719182 | orchestrator | + echo 2026-03-13 01:23:27.719188 | orchestrator | + echo '# Cinder' 2026-03-13 01:23:27.719194 | orchestrator | + echo 2026-03-13 01:23:27.719200 | orchestrator | + openstack volume service list 2026-03-13 01:23:30.042991 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-03-13 01:23:30.043072 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-03-13 01:23:30.043078 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-03-13 01:23:30.043082 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-13T01:23:29.000000 | 2026-03-13 01:23:30.043086 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-13T01:23:29.000000 | 2026-03-13 01:23:30.043090 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-13T01:23:28.000000 | 2026-03-13 01:23:30.043095 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-03-13T01:23:28.000000 | 2026-03-13 01:23:30.043098 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-03-13T01:23:20.000000 | 2026-03-13 01:23:30.043102 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-03-13T01:23:21.000000 | 2026-03-13 01:23:30.043106 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-03-13T01:23:29.000000 | 2026-03-13 01:23:30.043110 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-03-13T01:23:22.000000 | 2026-03-13 01:23:30.043113 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-03-13T01:23:23.000000 | 2026-03-13 01:23:30.043117 | orchestrator | 
+------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-03-13 01:23:30.198594 | orchestrator | 2026-03-13 01:23:30.198663 | orchestrator | # Neutron 2026-03-13 01:23:30.198670 | orchestrator | 2026-03-13 01:23:30.198674 | orchestrator | + echo 2026-03-13 01:23:30.198679 | orchestrator | + echo '# Neutron' 2026-03-13 01:23:30.198684 | orchestrator | + echo 2026-03-13 01:23:30.198688 | orchestrator | + openstack network agent list 2026-03-13 01:23:32.727901 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-03-13 01:23:32.727999 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2026-03-13 01:23:32.728011 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-03-13 01:23:32.728018 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-03-13 01:23:32.728024 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-03-13 01:23:32.728032 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-03-13 01:23:32.728038 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-03-13 01:23:32.728045 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-03-13 01:23:32.728053 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-03-13 01:23:32.728076 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | 
neutron-ovn-metadata-agent | 2026-03-13 01:23:32.728080 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2026-03-13 01:23:32.728084 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-03-13 01:23:32.728088 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-03-13 01:23:32.888345 | orchestrator | + openstack network service provider list 2026-03-13 01:23:35.121874 | orchestrator | +---------------+------+---------+ 2026-03-13 01:23:35.121931 | orchestrator | | Service Type | Name | Default | 2026-03-13 01:23:35.121941 | orchestrator | +---------------+------+---------+ 2026-03-13 01:23:35.121948 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-03-13 01:23:35.121954 | orchestrator | +---------------+------+---------+ 2026-03-13 01:23:35.279421 | orchestrator | 2026-03-13 01:23:35.279468 | orchestrator | # Nova 2026-03-13 01:23:35.279473 | orchestrator | 2026-03-13 01:23:35.279477 | orchestrator | + echo 2026-03-13 01:23:35.279481 | orchestrator | + echo '# Nova' 2026-03-13 01:23:35.279486 | orchestrator | + echo 2026-03-13 01:23:35.279490 | orchestrator | + openstack compute service list 2026-03-13 01:23:37.586616 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-13 01:23:37.586678 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-03-13 01:23:37.586685 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-13 01:23:37.586691 | orchestrator | | 23ac1936-ade0-4893-a3a9-e500384ab227 | nova-scheduler | testbed-node-0 | internal | 
enabled | up | 2026-03-13T01:23:37.000000 | 2026-03-13 01:23:37.586710 | orchestrator | | 2e46d5d1-4583-4804-b29a-24f9884d8aff | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-13T01:23:36.000000 | 2026-03-13 01:23:37.586716 | orchestrator | | 0f6107c4-c718-4df5-b872-6cb08130609d | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-13T01:23:28.000000 | 2026-03-13 01:23:37.586722 | orchestrator | | 875fbdfb-461f-4de4-b874-ed4e12f64d13 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-03-13T01:23:33.000000 | 2026-03-13 01:23:37.586728 | orchestrator | | 71d616fe-4119-44b6-8ce3-64e6cb243d99 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-03-13T01:23:34.000000 | 2026-03-13 01:23:37.586776 | orchestrator | | fcc84cd4-8bdb-49b8-9cc0-d2e8d69f4483 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-03-13T01:23:35.000000 | 2026-03-13 01:23:37.586783 | orchestrator | | 95113273-e670-43a7-b73c-4223f40167d4 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-03-13T01:23:32.000000 | 2026-03-13 01:23:37.586789 | orchestrator | | a44df1e8-84d0-4d0b-961e-db7973c69622 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-03-13T01:23:33.000000 | 2026-03-13 01:23:37.586794 | orchestrator | | 5d74028a-3c16-4ae6-a460-e8a83f03f4f6 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-03-13T01:23:33.000000 | 2026-03-13 01:23:37.586800 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-13 01:23:37.736807 | orchestrator | + openstack hypervisor list 2026-03-13 01:23:40.459775 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-13 01:23:40.459827 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-03-13 01:23:40.459832 | orchestrator | 
+--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-13 01:23:40.459848 | orchestrator | | 4f420407-2a7a-4340-aa41-607dde3fe806 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-03-13 01:23:40.459851 | orchestrator | | 58988beb-5fee-43db-a709-468e64adeb95 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-03-13 01:23:40.459854 | orchestrator | | b09706ab-93a8-4efd-9ff7-3a3bb6664d20 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-03-13 01:23:40.459857 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-13 01:23:40.704329 | orchestrator | 2026-03-13 01:23:40.704389 | orchestrator | # Run OpenStack test play 2026-03-13 01:23:40.704399 | orchestrator | 2026-03-13 01:23:40.704406 | orchestrator | + echo 2026-03-13 01:23:40.704412 | orchestrator | + echo '# Run OpenStack test play' 2026-03-13 01:23:40.704419 | orchestrator | + echo 2026-03-13 01:23:40.704426 | orchestrator | + osism apply --environment openstack test 2026-03-13 01:23:42.674644 | orchestrator | 2026-03-13 01:23:42 | INFO  | Trying to run play test in environment openstack 2026-03-13 01:23:52.701129 | orchestrator | 2026-03-13 01:23:52 | INFO  | Prepare task for execution of test. 2026-03-13 01:23:52.777398 | orchestrator | 2026-03-13 01:23:52 | INFO  | Task 1671ca0e-8e3d-47cf-918e-407ef6a37384 (test) was prepared for execution. 2026-03-13 01:23:52.777494 | orchestrator | 2026-03-13 01:23:52 | INFO  | It takes a moment until task 1671ca0e-8e3d-47cf-918e-407ef6a37384 (test) has been started and output is visible here. 
PLAY [Create test project] *****************************************************

TASK [Create test domain] ******************************************************
Friday 13 March 2026 01:23:56 +0000 (0:00:00.069)       0:00:00.069 **********
changed: [localhost]

TASK [Create test-admin user] **************************************************
Friday 13 March 2026 01:24:00 +0000 (0:00:03.504)       0:00:03.574 **********
changed: [localhost]

TASK [Add manager role to user test-admin] *************************************
Friday 13 March 2026 01:24:04 +0000 (0:00:04.132)       0:00:07.706 **********
changed: [localhost]

TASK [Create test project] *****************************************************
Friday 13 March 2026 01:24:10 +0000 (0:00:06.317)       0:00:14.024 **********
changed: [localhost]

TASK [Create test user] ********************************************************
Friday 13 March 2026 01:24:14 +0000 (0:00:03.565)       0:00:17.590 **********
changed: [localhost]

TASK [Add member roles to user test] *******************************************
Friday 13 March 2026 01:24:18 +0000 (0:00:03.727)       0:00:21.318 **********
changed: [localhost] => (item=load-balancer_member)
changed: [localhost] => (item=member)
changed: [localhost] => (item=creator)

TASK [Create test server group] ************************************************
Friday 13 March 2026 01:24:28 +0000 (0:00:10.857)       0:00:32.175 **********
changed: [localhost]

TASK [Create ssh security group] ***********************************************
Friday 13 March 2026 01:24:33 +0000 (0:00:04.647)       0:00:36.823 **********
changed: [localhost]

TASK [Add rule to ssh security group] ******************************************
Friday 13 March 2026 01:24:38 +0000 (0:00:05.090)       0:00:41.914 **********
changed: [localhost]

TASK [Create icmp security group] **********************************************
Friday 13 March 2026 01:24:42 +0000 (0:00:04.121)       0:00:46.035 **********
changed: [localhost]

TASK [Add rule to icmp security group] *****************************************
Friday 13 March 2026 01:24:46 +0000 (0:00:04.034)       0:00:50.069 **********
changed: [localhost]

TASK [Create test keypair] *****************************************************
Friday 13 March 2026 01:24:50 +0000 (0:00:03.751)       0:00:53.821 **********
changed: [localhost]

TASK [Create test network] *****************************************************
Friday 13 March 2026 01:24:54 +0000 (0:00:03.719)       0:00:57.540 **********
changed: [localhost]

TASK [Create test subnet] ******************************************************
Friday 13 March 2026 01:24:58 +0000 (0:00:04.537)       0:01:02.078 **********
changed: [localhost]

TASK [Create test router] ******************************************************
Friday 13 March 2026 01:25:03 +0000 (0:00:04.742)       0:01:06.820 **********
changed: [localhost]

PLAY [Manage test instances and volumes] ***************************************

TASK [Get test server group] ***************************************************
Friday 13 March 2026 01:25:12 +0000 (0:00:09.344)       0:01:16.164 **********
ok: [localhost]

TASK [Detach test volume] ******************************************************
Friday 13 March 2026 01:25:16 +0000 (0:00:03.671)       0:01:19.836 **********
skipping: [localhost]

TASK [Delete test volume] ******************************************************
Friday 13 March 2026 01:25:16 +0000 (0:00:00.048)       0:01:19.884 **********
skipping: [localhost]

TASK [Delete test instances] ***************************************************
Friday 13 March 2026 01:25:16 +0000 (0:00:00.042)       0:01:19.927 **********
skipping: [localhost] => (item=test-4)
skipping: [localhost] => (item=test-3)
skipping: [localhost] => (item=test-2)
skipping: [localhost] => (item=test-1)
skipping: [localhost] => (item=test)
skipping: [localhost]

TASK [Wait for instance deletion to complete] **********************************
Friday 13 March 2026 01:25:16 +0000 (0:00:00.154)       0:01:20.081 **********
skipping: [localhost]

TASK [Create test instances] ***************************************************
Friday 13 March 2026 01:25:17 +0000 (0:00:00.146)       0:01:20.228 **********
changed: [localhost] => (item=test)
changed: [localhost] => (item=test-1)
changed: [localhost] => (item=test-2)
changed: [localhost] => (item=test-3)
changed: [localhost] => (item=test-4)

TASK [Wait for instance creation to complete] **********************************
Friday 13 March 2026 01:25:21 +0000 (0:00:04.651)       0:01:24.879 **********
FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left).
FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left).
FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left).
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j917243064401.2643', 'results_file': '/ansible/.ansible_async/j917243064401.2643', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j40903137193.2668', 'results_file': '/ansible/.ansible_async/j40903137193.2668', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j81348012812.2693', 'results_file': '/ansible/.ansible_async/j81348012812.2693', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j157356853559.2718', 'results_file': '/ansible/.ansible_async/j157356853559.2718', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j337636523002.2743', 'results_file': '/ansible/.ansible_async/j337636523002.2743', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})

TASK [Add metadata to instances] ***********************************************
Friday 13 March 2026 01:26:08 +0000 (0:00:46.770)       0:02:11.650 **********
changed: [localhost] => (item=test)
changed: [localhost] => (item=test-1)
changed: [localhost] => (item=test-2)
changed: [localhost] => (item=test-3)
changed: [localhost] => (item=test-4)

TASK [Wait for metadata to be added] *******************************************
Friday 13 March 2026 01:26:12 +0000 (0:00:04.452)       0:02:16.102 **********
FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left).
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j560374145943.2848', 'results_file': '/ansible/.ansible_async/j560374145943.2848', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j499315334368.2873', 'results_file': '/ansible/.ansible_async/j499315334368.2873', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j291769436056.2898', 'results_file': '/ansible/.ansible_async/j291769436056.2898', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j565890282366.2923', 'results_file': '/ansible/.ansible_async/j565890282366.2923', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j720980300937.2948', 'results_file': '/ansible/.ansible_async/j720980300937.2948', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
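The "FAILED - RETRYING" lines above are not errors: the play launches each operation as an Ansible async job and then polls `async_status` with `until`/`retries` until the job reports `finished`. The same poll-until-done pattern, reduced to plain Python (a sketch; `wait_for` and the fake job are illustrative, not testbed code):

```python
import time

def wait_for(check, retries=60, delay=5.0):
    """Poll `check()` until it returns truthy, like Ansible's
    `until: job.finished` with `retries`/`delay`. Sketch only."""
    for _attempt in range(retries):
        result = check()
        if result:
            return result
        time.sleep(delay)  # each falsy result is one "FAILED - RETRYING" line
    raise TimeoutError(f"not finished after {retries} attempts")

# Usage with a fake "async job" that finishes on the third poll:
state = {"polls": 0}
def job_finished():
    state["polls"] += 1
    return state["polls"] >= 3

assert wait_for(job_finished, retries=5, delay=0) is True
```

Each unsuccessful poll decrements the retry counter (60 for instance creation, 30 for metadata/tags above); only exhausting the counter fails the task.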
TASK [Add tag to instances] ****************************************************
Friday 13 March 2026 01:26:22 +0000 (0:00:09.260)       0:02:25.363 **********
changed: [localhost] => (item=test)
changed: [localhost] => (item=test-1)
changed: [localhost] => (item=test-2)
changed: [localhost] => (item=test-3)
changed: [localhost] => (item=test-4)

TASK [Wait for tags to be added] ***********************************************
Friday 13 March 2026 01:26:26 +0000 (0:00:03.858)       0:02:29.222 **********
FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left).
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j363314647472.3017', 'results_file': '/ansible/.ansible_async/j363314647472.3017', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j892816790373.3042', 'results_file': '/ansible/.ansible_async/j892816790373.3042', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j339926928133.3068', 'results_file': '/ansible/.ansible_async/j339926928133.3068', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j193155371449.3094', 'results_file': '/ansible/.ansible_async/j193155371449.3094', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j405543119001.3120', 'results_file': '/ansible/.ansible_async/j405543119001.3120', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})

TASK [Create test volume] ******************************************************
Friday 13 March 2026 01:26:35 +0000 (0:00:09.776)       0:02:38.998 **********
changed: [localhost]

TASK [Attach test volume] ******************************************************
Friday 13 March 2026 01:26:42 +0000 (0:00:06.535)       0:02:45.534 **********
changed: [localhost]

TASK [Create floating ip address] **********************************************
Friday 13 March 2026 01:26:55 +0000 (0:00:13.171)       0:02:58.705 **********
ok: [localhost]

TASK [Print floating ip address] ***********************************************
Friday 13 March 2026 01:27:00 +0000 (0:00:04.562)       0:03:03.268 **********
ok: [localhost] => {
    "msg": "192.168.112.108"
}

PLAY RECAP *********************************************************************
localhost                  : ok=26   changed=23   unreachable=0    failed=0    skipped=4    rescued=0    ignored=0

TASKS RECAP ********************************************************************
Friday 13 March 2026 01:27:00 +0000 (0:00:00.037)       0:03:03.305 **********
===============================================================================
Wait for instance creation to complete --------------------------------- 46.77s
Attach test volume ----------------------------------------------------- 13.17s
Add member roles to user test ------------------------------------------ 10.86s
Wait for tags to be added ----------------------------------------------- 9.78s
Create test router ------------------------------------------------------ 9.34s
Wait for metadata to be added ------------------------------------------- 9.26s
Create test volume ------------------------------------------------------ 6.54s
Add manager role to user test-admin ------------------------------------- 6.32s
Create ssh security group ----------------------------------------------- 5.09s
Create test subnet ------------------------------------------------------ 4.74s
Create test instances --------------------------------------------------- 4.65s
Create test server group ------------------------------------------------ 4.65s
Create floating ip address ---------------------------------------------- 4.56s
Create test network ----------------------------------------------------- 4.54s
Add metadata to instances ----------------------------------------------- 4.45s
Create test-admin user -------------------------------------------------- 4.13s
Add rule to ssh security group ------------------------------------------ 4.12s
Create icmp security group ---------------------------------------------- 4.03s
Add tag to instances ---------------------------------------------------- 3.86s
Add rule to icmp security group ----------------------------------------- 3.75s

2026-03-13 01:27:00.667780 | orchestrator | + server_list
2026-03-13 01:27:00.667833 | orchestrator | + openstack --os-cloud test server list
+--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
| ID                                   | Name   | Status | Networks                              | Image                    | Flavor   |
+--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
| 140b68c4-9b8a-4f3f-9230-5cbf7c98134a | test-4 | ACTIVE | test=192.168.112.162, 192.168.200.118 | N/A (booted from volume) | SCS-1L-1 |
| 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd | test-3 | ACTIVE | test=192.168.112.179, 192.168.200.253 | N/A (booted from volume) | SCS-1L-1 |
| 5db10de1-c950-4c0c-a93f-3bb6e055c017 | test-1 | ACTIVE | test=192.168.112.133, 192.168.200.3   | N/A (booted from volume) | SCS-1L-1 |
| d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf | test-2 | ACTIVE | test=192.168.112.112, 192.168.200.190 | N/A (booted from volume) | SCS-1L-1 |
| 27168077-d802-4639-bb7f-7e68b59b2281 | test   | ACTIVE | test=192.168.112.108, 192.168.200.206 | N/A (booted from volume) | SCS-1L-1 |
+--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-03-13 01:27:04.778785 | orchestrator | + openstack --os-cloud test server show test
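Each server's Networks column lists two addresses on the `test` network. They can be grouped by address range with the standard library; the /24 prefixes below are assumptions for illustration (the actual subnet sizes are not visible in this log), and `classify` is not part of the testbed tooling:

```python
import ipaddress

# Sketch: group the addresses from the server list by CIDR membership.
# The /24 prefixes are assumed for illustration only.
RANGES = {
    "192.168.112.0/24": ipaddress.ip_network("192.168.112.0/24"),
    "192.168.200.0/24": ipaddress.ip_network("192.168.200.0/24"),
}

def classify(addresses):
    out = {name: [] for name in RANGES}
    for addr in addresses:
        ip = ipaddress.ip_address(addr)
        for name, net in RANGES.items():
            if ip in net:
                out[name].append(addr)
    return out

# Addresses of the "test" server from the listing above:
print(classify(["192.168.112.108", "192.168.200.206"]))
```

Note that 192.168.112.108 is also the address printed by the "Print floating ip address" task earlier in the play.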
+-------------------------------------+---------------------------------------------------------------------------+
| Field                               | Value                                                                     |
+-------------------------------------+---------------------------------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                                                    |
| OS-EXT-AZ:availability_zone         | nova                                                                      |
| OS-EXT-SRV-ATTR:host                | None                                                                      |
| OS-EXT-SRV-ATTR:hostname            | test                                                                      |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                                                      |
| OS-EXT-SRV-ATTR:instance_name       | None                                                                      |
| OS-EXT-SRV-ATTR:kernel_id           | None                                                                      |
| OS-EXT-SRV-ATTR:launch_index        | None                                                                      |
| OS-EXT-SRV-ATTR:ramdisk_id          | None                                                                      |
| OS-EXT-SRV-ATTR:reservation_id      | None                                                                      |
| OS-EXT-SRV-ATTR:root_device_name    | None                                                                      |
| OS-EXT-SRV-ATTR:user_data           | None                                                                      |
| OS-EXT-STS:power_state              | Running                                                                   |
| OS-EXT-STS:task_state               | None                                                                      |
| OS-EXT-STS:vm_state                 | active                                                                    |
| OS-SRV-USG:launched_at              | 2026-03-13T01:25:51.000000                                                |
| OS-SRV-USG:terminated_at            | None                                                                      |
| accessIPv4                          |                                                                           |
| accessIPv6                          |                                                                           |
| addresses                           | test=192.168.112.108, 192.168.200.206                                     |
| config_drive                        |                                                                           |
| created                             | 2026-03-13T01:25:25Z                                                      |
| description                         | None                                                                      |
| flavor                              | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId                              | 548ee2bebdea8e9b41c75b615bdff4abaad95e8390f635863920170c                  |
| host_status                         | None                                                                      |
| id                                  | 27168077-d802-4639-bb7f-7e68b59b2281                                      |
| image                               | N/A (booted from volume)                                                  |
| key_name                            | test                                                                      |
| locked                              | False                                                                     |
| locked_reason                       | None                                                                      |
| name                                | test                                                                      |
| pinned_availability_zone            | None                                                                      |
| progress                            | 0                                                                         |
| project_id                          | 8cebd5b89c924bd1aa0e16975392adbb                                          |
| properties                          | hostname='test'                                                           |
| security_groups                     | name='icmp'                                                               |
|                                     | name='ssh'                                                                |
| server_groups                       | None                                                                      |
| status                              | ACTIVE                                                                    |
| tags                                | test                                                                      |
| trusted_image_certificates          | None                                                                      |
| updated                             | 2026-03-13T01:26:14Z                                                      |
| user_id                             | ee44de2b56e44b87a3b7dfc4987a84c5                                          |
| volumes_attached                    | delete_on_termination='True', id='2e666486-d2dc-4356-8f53-32784d1c7c80'   |
|                                     | delete_on_termination='False', id='8d1ba26d-e4c0-4663-a73f-890d9844abbe'  |
+-------------------------------------+---------------------------------------------------------------------------+
2026-03-13 01:27:08.292102 | orchestrator | + openstack --os-cloud test server show test-1
+-------------------------------------+---------------------------------------------------------------------------+
| Field                               | Value                                                                     |
+-------------------------------------+---------------------------------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                                                    |
| OS-EXT-AZ:availability_zone         | nova                                                                      |
| OS-EXT-SRV-ATTR:host                | None                                                                      |
| OS-EXT-SRV-ATTR:hostname            | test-1                                                                    |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                                                      |
| OS-EXT-SRV-ATTR:instance_name       | None                                                                      |
| OS-EXT-SRV-ATTR:kernel_id           | None                                                                      |
| OS-EXT-SRV-ATTR:launch_index        | None                                                                      |
| OS-EXT-SRV-ATTR:ramdisk_id          | None                                                                      |
| OS-EXT-SRV-ATTR:reservation_id      | None                                                                      |
| OS-EXT-SRV-ATTR:root_device_name    | None                                                                      |
| OS-EXT-SRV-ATTR:user_data           | None                                                                      |
| OS-EXT-STS:power_state              | Running                                                                   |
| OS-EXT-STS:task_state               | None                                                                      |
| OS-EXT-STS:vm_state                 | active                                                                    |
| OS-SRV-USG:launched_at              | 2026-03-13T01:25:51.000000                                                |
| OS-SRV-USG:terminated_at            | None                                                                      |
| accessIPv4                          |                                                                           |
| accessIPv6                          |                                                                           |
| addresses                           | test=192.168.112.133, 192.168.200.3                                       |
| config_drive                        |                                                                           |
| created                             | 2026-03-13T01:25:26Z                                                      |
| description                         | None                                                                      |
| flavor                              | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId                              | 548ee2bebdea8e9b41c75b615bdff4abaad95e8390f635863920170c                  |
| host_status                         | None                                                                      |
| id                                  | 5db10de1-c950-4c0c-a93f-3bb6e055c017                                      |
| image                               | N/A (booted from volume)                                                  |
| key_name                            | test                                                                      |
| locked                              | False                                                                     |
| locked_reason                       | None                                                                      |
| name                                | test-1                                                                    |
| pinned_availability_zone            | None                                                                      |
| progress                            | 0                                                                         |
| project_id                          | 8cebd5b89c924bd1aa0e16975392adbb                                          |
| properties                          | hostname='test-1'                                                         |
| security_groups                     | name='icmp'                                                               |
|                                     | name='ssh'                                                                |
| server_groups                       | None                                                                      |
| status                              | ACTIVE                                                                    |
| tags                                | test                                                                      |
| trusted_image_certificates          | None                                                                      |
| updated                             | 2026-03-13T01:26:14Z                                                      |
| user_id                             | ee44de2b56e44b87a3b7dfc4987a84c5                                          |
| volumes_attached                    | delete_on_termination='True', id='ded3d2ed-d328-4768-85aa-cba2b81acbf3'   |
+-------------------------------------+---------------------------------------------------------------------------+
2026-03-13 01:27:11.385861 | orchestrator | + openstack --os-cloud test server show test-2
+-------------------------------------+---------------------------------------------------------------------------+
| Field                               | Value                                                                     |
+-------------------------------------+---------------------------------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                                                    |
| OS-EXT-AZ:availability_zone         | nova                                                                      |
| OS-EXT-SRV-ATTR:host                | None                                                                      |
| OS-EXT-SRV-ATTR:hostname            | test-2                                                                    |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                                                      |
| OS-EXT-SRV-ATTR:instance_name       | None                                                                      |
| OS-EXT-SRV-ATTR:kernel_id           | None                                                                      |
| OS-EXT-SRV-ATTR:launch_index        | None                                                                      |
| OS-EXT-SRV-ATTR:ramdisk_id          | None                                                                      |
| OS-EXT-SRV-ATTR:reservation_id      | None                                                                      |
| OS-EXT-SRV-ATTR:root_device_name    | None                                                                      |
| OS-EXT-SRV-ATTR:user_data           | None                                                                      |
| OS-EXT-STS:power_state              | Running                                                                   |
| OS-EXT-STS:task_state               | None                                                                      |
| OS-EXT-STS:vm_state                 | active                                                                    |
| OS-SRV-USG:launched_at              | 2026-03-13T01:25:51.000000                                                |
| OS-SRV-USG:terminated_at            | None                                                                      |
| accessIPv4                          |                                                                           |
| accessIPv6                          |                                                                           |
| addresses                           | test=192.168.112.112, 192.168.200.190                                     |
| config_drive                        |                                                                           |
| created                             | 2026-03-13T01:25:26Z                                                      |
| description                         | None                                                                      |
| flavor                              | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core',
extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-13 01:27:14.341720 | orchestrator | | hostId | 5b67b44461d2f6f9c436d6e6b4bd55bc22c6ba25e59323634c5776e3 | 2026-03-13 01:27:14.341742 | orchestrator | | host_status | None | 2026-03-13 01:27:14.341753 | orchestrator | | id | d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf | 2026-03-13 01:27:14.341759 | orchestrator | | image | N/A (booted from volume) | 2026-03-13 01:27:14.341764 | orchestrator | | key_name | test | 2026-03-13 01:27:14.341769 | orchestrator | | locked | False | 2026-03-13 01:27:14.341775 | orchestrator | | locked_reason | None | 2026-03-13 01:27:14.341781 | orchestrator | | name | test-2 | 2026-03-13 01:27:14.341788 | orchestrator | | pinned_availability_zone | None | 2026-03-13 01:27:14.341796 | orchestrator | | progress | 0 | 2026-03-13 01:27:14.341802 | orchestrator | | project_id | 8cebd5b89c924bd1aa0e16975392adbb | 2026-03-13 01:27:14.341806 | orchestrator | | properties | hostname='test-2' | 2026-03-13 01:27:14.341814 | orchestrator | | security_groups | name='icmp' | 2026-03-13 01:27:14.341818 | orchestrator | | | name='ssh' | 2026-03-13 01:27:14.341822 | orchestrator | | server_groups | None | 2026-03-13 01:27:14.341826 | orchestrator | | status | ACTIVE | 2026-03-13 01:27:14.341830 | orchestrator | | tags | test | 2026-03-13 01:27:14.341834 | orchestrator | | trusted_image_certificates | None | 2026-03-13 01:27:14.341841 | orchestrator | | updated | 2026-03-13T01:26:15Z | 2026-03-13 01:27:14.341848 | orchestrator | | user_id | ee44de2b56e44b87a3b7dfc4987a84c5 | 2026-03-13 01:27:14.341852 | orchestrator | | volumes_attached | delete_on_termination='True', id='c43c1913-0d26-4a8b-af6b-ff1e93bcf8e7' | 2026-03-13 01:27:14.344916 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-13 01:27:14.572931 | orchestrator | + openstack --os-cloud test server show test-3 2026-03-13 01:27:17.249081 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-13 01:27:17.249140 | orchestrator | | Field | Value | 2026-03-13 01:27:17.249151 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-13 01:27:17.249158 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-13 01:27:17.249163 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-13 01:27:17.249181 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-13 01:27:17.249186 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-03-13 01:27:17.249198 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-13 01:27:17.249203 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-13 
01:27:17.249215 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-13 01:27:17.249219 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-13 01:27:17.249223 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-13 01:27:17.249227 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-13 01:27:17.249231 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-13 01:27:17.249235 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-13 01:27:17.249242 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-13 01:27:17.249245 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-13 01:27:17.249251 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-13 01:27:17.249255 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-13T01:25:51.000000 | 2026-03-13 01:27:17.249262 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-13 01:27:17.249266 | orchestrator | | accessIPv4 | | 2026-03-13 01:27:17.249270 | orchestrator | | accessIPv6 | | 2026-03-13 01:27:17.249274 | orchestrator | | addresses | test=192.168.112.179, 192.168.200.253 | 2026-03-13 01:27:17.249277 | orchestrator | | config_drive | | 2026-03-13 01:27:17.249284 | orchestrator | | created | 2026-03-13T01:25:27Z | 2026-03-13 01:27:17.249288 | orchestrator | | description | None | 2026-03-13 01:27:17.249292 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-13 01:27:17.249296 | orchestrator | | hostId | 5b67b44461d2f6f9c436d6e6b4bd55bc22c6ba25e59323634c5776e3 | 2026-03-13 01:27:17.249458 | orchestrator | | host_status | None | 2026-03-13 01:27:17.249467 | orchestrator | | id | 
1bf3b533-6c02-4133-8c7d-8c36e59e3fdd | 2026-03-13 01:27:17.249471 | orchestrator | | image | N/A (booted from volume) | 2026-03-13 01:27:17.249475 | orchestrator | | key_name | test | 2026-03-13 01:27:17.249479 | orchestrator | | locked | False | 2026-03-13 01:27:17.249486 | orchestrator | | locked_reason | None | 2026-03-13 01:27:17.249491 | orchestrator | | name | test-3 | 2026-03-13 01:27:17.249498 | orchestrator | | pinned_availability_zone | None | 2026-03-13 01:27:17.249504 | orchestrator | | progress | 0 | 2026-03-13 01:27:17.249510 | orchestrator | | project_id | 8cebd5b89c924bd1aa0e16975392adbb | 2026-03-13 01:27:17.249517 | orchestrator | | properties | hostname='test-3' | 2026-03-13 01:27:17.249527 | orchestrator | | security_groups | name='icmp' | 2026-03-13 01:27:17.249533 | orchestrator | | | name='ssh' | 2026-03-13 01:27:17.249540 | orchestrator | | server_groups | None | 2026-03-13 01:27:17.249551 | orchestrator | | status | ACTIVE | 2026-03-13 01:27:17.249558 | orchestrator | | tags | test | 2026-03-13 01:27:17.249565 | orchestrator | | trusted_image_certificates | None | 2026-03-13 01:27:17.249570 | orchestrator | | updated | 2026-03-13T01:26:16Z | 2026-03-13 01:27:17.249574 | orchestrator | | user_id | ee44de2b56e44b87a3b7dfc4987a84c5 | 2026-03-13 01:27:17.249577 | orchestrator | | volumes_attached | delete_on_termination='True', id='e4328123-14b3-4f00-a715-fbc1789f8994' | 2026-03-13 01:27:17.254151 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-13 01:27:17.473422 | orchestrator | + openstack --os-cloud test server show test-4 2026-03-13 01:27:20.286526 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-13 01:27:20.286600 | orchestrator | | Field | Value | 2026-03-13 01:27:20.286607 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-13 01:27:20.286629 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-13 01:27:20.286634 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-13 01:27:20.286650 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-13 01:27:20.286654 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-03-13 01:27:20.286658 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-13 01:27:20.286663 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-13 01:27:20.286679 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-13 01:27:20.286683 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-13 01:27:20.286687 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-13 01:27:20.286700 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-13 01:27:20.286704 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-13 01:27:20.286708 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-13 01:27:20.286716 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2026-03-13 01:27:20.286720 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-13 01:27:20.286724 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-13 01:27:20.286774 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-13T01:25:52.000000 | 2026-03-13 01:27:20.286783 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-13 01:27:20.286787 | orchestrator | | accessIPv4 | | 2026-03-13 01:27:20.286795 | orchestrator | | accessIPv6 | | 2026-03-13 01:27:20.286799 | orchestrator | | addresses | test=192.168.112.162, 192.168.200.118 | 2026-03-13 01:27:20.286803 | orchestrator | | config_drive | | 2026-03-13 01:27:20.286807 | orchestrator | | created | 2026-03-13T01:25:28Z | 2026-03-13 01:27:20.286814 | orchestrator | | description | None | 2026-03-13 01:27:20.286817 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-13 01:27:20.286821 | orchestrator | | hostId | 821868d67cc43368dca97a6e05d3981eeead9749b405ae5021c6d5c3 | 2026-03-13 01:27:20.286825 | orchestrator | | host_status | None | 2026-03-13 01:27:20.286832 | orchestrator | | id | 140b68c4-9b8a-4f3f-9230-5cbf7c98134a | 2026-03-13 01:27:20.286840 | orchestrator | | image | N/A (booted from volume) | 2026-03-13 01:27:20.286844 | orchestrator | | key_name | test | 2026-03-13 01:27:20.286848 | orchestrator | | locked | False | 2026-03-13 01:27:20.286851 | orchestrator | | locked_reason | None | 2026-03-13 01:27:20.286855 | orchestrator | | name | test-4 | 2026-03-13 01:27:20.286864 | orchestrator | | pinned_availability_zone | None | 2026-03-13 01:27:20.286871 | orchestrator | | progress | 0 | 2026-03-13 
01:27:20.286877 | orchestrator | | project_id | 8cebd5b89c924bd1aa0e16975392adbb | 2026-03-13 01:27:20.286883 | orchestrator | | properties | hostname='test-4' | 2026-03-13 01:27:20.286893 | orchestrator | | security_groups | name='icmp' | 2026-03-13 01:27:20.286904 | orchestrator | | | name='ssh' | 2026-03-13 01:27:20.286910 | orchestrator | | server_groups | None | 2026-03-13 01:27:20.286916 | orchestrator | | status | ACTIVE | 2026-03-13 01:27:20.286921 | orchestrator | | tags | test | 2026-03-13 01:27:20.286927 | orchestrator | | trusted_image_certificates | None | 2026-03-13 01:27:20.286933 | orchestrator | | updated | 2026-03-13T01:26:16Z | 2026-03-13 01:27:20.286938 | orchestrator | | user_id | ee44de2b56e44b87a3b7dfc4987a84c5 | 2026-03-13 01:27:20.286944 | orchestrator | | volumes_attached | delete_on_termination='True', id='890be6f8-5f81-493f-9f32-aae3481d08b6' | 2026-03-13 01:27:20.291665 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-13 01:27:20.540266 | orchestrator | + server_ping 2026-03-13 01:27:20.540698 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-03-13 01:27:20.540717 | orchestrator | ++ tr -d '\r' 2026-03-13 01:27:23.226359 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-13 01:27:23.226508 | orchestrator | + ping -c3 192.168.112.162 2026-03-13 01:27:23.242861 | orchestrator | PING 192.168.112.162 (192.168.112.162) 56(84) bytes of data. 
2026-03-13 01:27:23.242939 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=1 ttl=63 time=10.7 ms
2026-03-13 01:27:24.236320 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=2 ttl=63 time=2.18 ms
2026-03-13 01:27:25.238288 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=3 ttl=63 time=1.74 ms
2026-03-13 01:27:25.238370 | orchestrator |
2026-03-13 01:27:25.238381 | orchestrator | --- 192.168.112.162 ping statistics ---
2026-03-13 01:27:25.238390 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-13 01:27:25.238397 | orchestrator | rtt min/avg/max/mdev = 1.741/4.860/10.666/4.108 ms
2026-03-13 01:27:25.238405 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-13 01:27:25.238413 | orchestrator | + ping -c3 192.168.112.112
2026-03-13 01:27:25.250847 | orchestrator | PING 192.168.112.112 (192.168.112.112) 56(84) bytes of data.
2026-03-13 01:27:25.250915 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=1 ttl=63 time=7.17 ms
2026-03-13 01:27:26.247151 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=2 ttl=63 time=2.04 ms
2026-03-13 01:27:27.248619 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=3 ttl=63 time=1.99 ms
2026-03-13 01:27:27.248705 | orchestrator |
2026-03-13 01:27:27.248712 | orchestrator | --- 192.168.112.112 ping statistics ---
2026-03-13 01:27:27.248719 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-13 01:27:27.248723 | orchestrator | rtt min/avg/max/mdev = 1.993/3.735/7.174/2.431 ms
2026-03-13 01:27:27.249285 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-13 01:27:27.249380 | orchestrator | + ping -c3 192.168.112.179
2026-03-13 01:27:27.263358 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data.
2026-03-13 01:27:27.263469 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=9.35 ms
2026-03-13 01:27:28.258474 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.93 ms
2026-03-13 01:27:29.258310 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=1.60 ms
2026-03-13 01:27:29.258383 | orchestrator |
2026-03-13 01:27:29.258390 | orchestrator | --- 192.168.112.179 ping statistics ---
2026-03-13 01:27:29.258396 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-13 01:27:29.258400 | orchestrator | rtt min/avg/max/mdev = 1.601/4.627/9.352/3.384 ms
2026-03-13 01:27:29.258833 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-13 01:27:29.258874 | orchestrator | + ping -c3 192.168.112.133
2026-03-13 01:27:29.270430 | orchestrator | PING 192.168.112.133 (192.168.112.133) 56(84) bytes of data.
2026-03-13 01:27:29.270521 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=1 ttl=63 time=6.65 ms
2026-03-13 01:27:30.266913 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=2 ttl=63 time=2.10 ms
2026-03-13 01:27:31.268490 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=3 ttl=63 time=1.73 ms
2026-03-13 01:27:31.268561 | orchestrator |
2026-03-13 01:27:31.268568 | orchestrator | --- 192.168.112.133 ping statistics ---
2026-03-13 01:27:31.268574 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-03-13 01:27:31.268632 | orchestrator | rtt min/avg/max/mdev = 1.731/3.490/6.645/2.235 ms
2026-03-13 01:27:31.268642 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-13 01:27:31.268648 | orchestrator | + ping -c3 192.168.112.108
2026-03-13 01:27:31.278767 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data.
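[Editor's note] The `server_ping` helper traced above iterates over every ACTIVE floating IP and pings it three times. A minimal sketch, reconstructed from the `+`-prefixed trace lines (the function name and the `openstack | tr` pipeline appear in the trace; everything else, such as invoking it directly, is an assumption):

```shell
# Sketch of the server_ping helper reconstructed from the trace.
# Assumes a clouds.yaml profile named "test"; tr strips stray CRs
# that the CLI may emit on some terminals.
server_ping() {
    for address in $(openstack --os-cloud test floating ip list \
            --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r'); do
        ping -c3 "$address"
    done
}
```

Because `ping -c3` exits non-zero on total packet loss, running this under `set -e` (as the trace's `+` echoing suggests the script does) aborts the job on the first unreachable address.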
2026-03-13 01:27:31.278836 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=5.79 ms
2026-03-13 01:27:32.277353 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.04 ms
2026-03-13 01:27:33.279232 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=1.93 ms
2026-03-13 01:27:33.279305 | orchestrator |
2026-03-13 01:27:33.279311 | orchestrator | --- 192.168.112.108 ping statistics ---
2026-03-13 01:27:33.279317 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-13 01:27:33.279322 | orchestrator | rtt min/avg/max/mdev = 1.934/3.252/5.786/1.791 ms
2026-03-13 01:27:33.279326 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-13 01:27:33.279331 | orchestrator | + compute_list
2026-03-13 01:27:33.279335 | orchestrator | + osism manage compute list testbed-node-3
2026-03-13 01:27:35.297198 | orchestrator | 2026-03-13 01:27:35 | ERROR  | Unable to get ansible vault password
2026-03-13 01:27:35.297279 | orchestrator | 2026-03-13 01:27:35 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-13 01:27:35.297290 | orchestrator | 2026-03-13 01:27:35 | ERROR  | Dropping encrypted entries
2026-03-13 01:27:36.712047 | orchestrator | +--------------------------------------+--------+----------+
2026-03-13 01:27:36.712122 | orchestrator | | ID | Name | Status |
2026-03-13 01:27:36.712127 | orchestrator | |--------------------------------------+--------+----------|
2026-03-13 01:27:36.712132 | orchestrator | | 140b68c4-9b8a-4f3f-9230-5cbf7c98134a | test-4 | ACTIVE |
2026-03-13 01:27:36.712136 | orchestrator | +--------------------------------------+--------+----------+
2026-03-13 01:27:37.022921 | orchestrator | + osism manage compute list testbed-node-4
2026-03-13 01:27:39.025173 | orchestrator | 2026-03-13 01:27:39 | ERROR  | Unable to get ansible vault password
2026-03-13 01:27:39.025267 | orchestrator | 2026-03-13 01:27:39 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-13 01:27:39.025281 | orchestrator | 2026-03-13 01:27:39 | ERROR  | Dropping encrypted entries
2026-03-13 01:27:40.530871 | orchestrator | +--------------------------------------+--------+----------+
2026-03-13 01:27:40.530949 | orchestrator | | ID | Name | Status |
2026-03-13 01:27:40.530956 | orchestrator | |--------------------------------------+--------+----------|
2026-03-13 01:27:40.530961 | orchestrator | | 5db10de1-c950-4c0c-a93f-3bb6e055c017 | test-1 | ACTIVE |
2026-03-13 01:27:40.530965 | orchestrator | | 27168077-d802-4639-bb7f-7e68b59b2281 | test | ACTIVE |
2026-03-13 01:27:40.530969 | orchestrator | +--------------------------------------+--------+----------+
2026-03-13 01:27:40.846538 | orchestrator | + osism manage compute list testbed-node-5
2026-03-13 01:27:42.842682 | orchestrator | 2026-03-13 01:27:42 | ERROR  | Unable to get ansible vault password
2026-03-13 01:27:42.842786 | orchestrator | 2026-03-13 01:27:42 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-13 01:27:42.842800 | orchestrator | 2026-03-13 01:27:42 | ERROR  | Dropping encrypted entries
2026-03-13 01:27:44.367629 | orchestrator | +--------------------------------------+--------+----------+
2026-03-13 01:27:44.367699 | orchestrator | | ID | Name | Status |
2026-03-13 01:27:44.367704 | orchestrator | |--------------------------------------+--------+----------|
2026-03-13 01:27:44.367709 | orchestrator | | 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd | test-3 | ACTIVE |
2026-03-13 01:27:44.367714 | orchestrator | | d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf | test-2 | ACTIVE |
2026-03-13 01:27:44.367718 | orchestrator | +--------------------------------------+--------+----------+
2026-03-13 01:27:44.691095 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4
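[Editor's note] The `osism manage compute migrate` step that follows live-migrates every instance off the source hypervisor and polls until each migration finishes. The loop below is a hedged approximation of that behaviour using only the plain OpenStack CLI; it is not the osism implementation (the function name `evacuate_host`, the 2-second interval, and the CLI-based approach are all assumptions):

```shell
# Hedged sketch: evacuate one hypervisor via live migration, polling
# each server until it leaves the MIGRATING state. Approximates the
# observable behaviour of "osism manage compute migrate"; not its code.
evacuate_host() {
    local source_host="$1" target_host="$2"
    for id in $(openstack --os-cloud test server list \
            --host "$source_host" -f value -c ID); do
        echo "Live migrating server $id"
        openstack --os-cloud test server migrate --live-migration \
            --host "$target_host" "$id"
        # Poll Nova until the migration completes or fails
        while [ "$(openstack --os-cloud test server show "$id" \
                -f value -c status)" = "MIGRATING" ]; do
            echo "Live migration of $id is still in progress"
            sleep 2
        done
    done
}
```

Specifying `--host` requires admin credentials; without it the scheduler picks the destination.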
2026-03-13 01:27:46.699042 | orchestrator | 2026-03-13 01:27:46 | ERROR  | Unable to get ansible vault password
2026-03-13 01:27:46.699187 | orchestrator | 2026-03-13 01:27:46 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-13 01:27:46.699202 | orchestrator | 2026-03-13 01:27:46 | ERROR  | Dropping encrypted entries
2026-03-13 01:27:47.857865 | orchestrator | 2026-03-13 01:27:47 | INFO  | Live migrating server 5db10de1-c950-4c0c-a93f-3bb6e055c017
2026-03-13 01:28:00.584583 | orchestrator | 2026-03-13 01:28:00 | INFO  | Live migration of 5db10de1-c950-4c0c-a93f-3bb6e055c017 (test-1) is still in progress
2026-03-13 01:28:02.913570 | orchestrator | 2026-03-13 01:28:02 | INFO  | Live migration of 5db10de1-c950-4c0c-a93f-3bb6e055c017 (test-1) is still in progress
2026-03-13 01:28:05.317349 | orchestrator | 2026-03-13 01:28:05 | INFO  | Live migration of 5db10de1-c950-4c0c-a93f-3bb6e055c017 (test-1) is still in progress
2026-03-13 01:28:07.976039 | orchestrator | 2026-03-13 01:28:07 | INFO  | Live migration of 5db10de1-c950-4c0c-a93f-3bb6e055c017 (test-1) is still in progress
2026-03-13 01:28:10.266250 | orchestrator | 2026-03-13 01:28:10 | INFO  | Live migration of 5db10de1-c950-4c0c-a93f-3bb6e055c017 (test-1) is still in progress
2026-03-13 01:28:12.480099 | orchestrator | 2026-03-13 01:28:12 | INFO  | Live migration of 5db10de1-c950-4c0c-a93f-3bb6e055c017 (test-1) is still in progress
2026-03-13 01:28:14.679462 | orchestrator | 2026-03-13 01:28:14 | INFO  | Live migration of 5db10de1-c950-4c0c-a93f-3bb6e055c017 (test-1) is still in progress
2026-03-13 01:28:16.966325 | orchestrator | 2026-03-13 01:28:16 | INFO  | Live migration of 5db10de1-c950-4c0c-a93f-3bb6e055c017 (test-1) is still in progress
2026-03-13 01:28:19.297637 | orchestrator | 2026-03-13 01:28:19 | INFO  | Live migration of 5db10de1-c950-4c0c-a93f-3bb6e055c017 (test-1) completed with status ACTIVE
2026-03-13 01:28:19.297785 | orchestrator | 2026-03-13 01:28:19 | INFO  | Live migrating server 27168077-d802-4639-bb7f-7e68b59b2281
2026-03-13 01:28:31.132317 | orchestrator | 2026-03-13 01:28:31 | INFO  | Live migration of 27168077-d802-4639-bb7f-7e68b59b2281 (test) is still in progress
2026-03-13 01:28:33.453618 | orchestrator | 2026-03-13 01:28:33 | INFO  | Live migration of 27168077-d802-4639-bb7f-7e68b59b2281 (test) is still in progress
2026-03-13 01:28:35.778607 | orchestrator | 2026-03-13 01:28:35 | INFO  | Live migration of 27168077-d802-4639-bb7f-7e68b59b2281 (test) is still in progress
2026-03-13 01:28:38.370530 | orchestrator | 2026-03-13 01:28:38 | INFO  | Live migration of 27168077-d802-4639-bb7f-7e68b59b2281 (test) is still in progress
2026-03-13 01:28:40.715065 | orchestrator | 2026-03-13 01:28:40 | INFO  | Live migration of 27168077-d802-4639-bb7f-7e68b59b2281 (test) is still in progress
2026-03-13 01:28:42.980555 | orchestrator | 2026-03-13 01:28:42 | INFO  | Live migration of 27168077-d802-4639-bb7f-7e68b59b2281 (test) is still in progress
2026-03-13 01:28:45.274752 | orchestrator | 2026-03-13 01:28:45 | INFO  | Live migration of 27168077-d802-4639-bb7f-7e68b59b2281 (test) is still in progress
2026-03-13 01:28:47.603539 | orchestrator | 2026-03-13 01:28:47 | INFO  | Live migration of 27168077-d802-4639-bb7f-7e68b59b2281 (test) is still in progress
2026-03-13 01:28:49.875108 | orchestrator | 2026-03-13 01:28:49 | INFO  | Live migration of 27168077-d802-4639-bb7f-7e68b59b2281 (test) is still in progress
2026-03-13 01:28:52.241657 | orchestrator | 2026-03-13 01:28:52 | INFO  | Live migration of 27168077-d802-4639-bb7f-7e68b59b2281 (test) is still in progress
2026-03-13 01:28:54.577026 | orchestrator | 2026-03-13 01:28:54 | INFO  | Live migration of 27168077-d802-4639-bb7f-7e68b59b2281 (test) completed with status ACTIVE
2026-03-13 01:28:54.929637 | orchestrator | + compute_list
2026-03-13 01:28:54.929778 | orchestrator | + osism manage compute list testbed-node-3
2026-03-13 01:28:57.075563 | orchestrator | 2026-03-13 01:28:57 | ERROR  | Unable to get ansible vault password
2026-03-13 01:28:57.075648 | orchestrator | 2026-03-13 01:28:57 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-13 01:28:57.075660 | orchestrator | 2026-03-13 01:28:57 | ERROR  | Dropping encrypted entries
2026-03-13 01:28:58.314881 | orchestrator | +--------------------------------------+--------+----------+
2026-03-13 01:28:58.314960 | orchestrator | | ID | Name | Status |
2026-03-13 01:28:58.314966 | orchestrator | |--------------------------------------+--------+----------|
2026-03-13 01:28:58.314971 | orchestrator | | 140b68c4-9b8a-4f3f-9230-5cbf7c98134a | test-4 | ACTIVE |
2026-03-13 01:28:58.314975 | orchestrator | | 5db10de1-c950-4c0c-a93f-3bb6e055c017 | test-1 | ACTIVE |
2026-03-13 01:28:58.314979 | orchestrator | | 27168077-d802-4639-bb7f-7e68b59b2281 | test | ACTIVE |
2026-03-13 01:28:58.314983 | orchestrator | +--------------------------------------+--------+----------+
2026-03-13 01:28:58.688023 | orchestrator | + osism manage compute list testbed-node-4
2026-03-13 01:29:00.807664 | orchestrator | 2026-03-13 01:29:00 | ERROR  | Unable to get ansible vault password
2026-03-13 01:29:00.807805 | orchestrator | 2026-03-13 01:29:00 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-13 01:29:00.807820 | orchestrator | 2026-03-13 01:29:00 | ERROR  | Dropping encrypted entries
2026-03-13 01:29:01.714724 | orchestrator | +------+--------+----------+
2026-03-13 01:29:01.714815 | orchestrator | | ID | Name | Status |
2026-03-13 01:29:01.714826 | orchestrator | |------+--------+----------|
2026-03-13 01:29:01.714833 | orchestrator | +------+--------+----------+
2026-03-13 01:29:02.115952 | orchestrator | + osism manage compute list testbed-node-5
2026-03-13 01:29:04.363413 | orchestrator | 2026-03-13 01:29:04 | ERROR  | Unable to get ansible vault password
2026-03-13 01:29:04.363479 | orchestrator | 2026-03-13 01:29:04 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-13 01:29:04.363677 | orchestrator | 2026-03-13 01:29:04 | ERROR  | Dropping encrypted entries
2026-03-13 01:29:05.394191 | orchestrator | +--------------------------------------+--------+----------+
2026-03-13 01:29:05.394246 | orchestrator | | ID | Name | Status |
2026-03-13 01:29:05.394255 | orchestrator | |--------------------------------------+--------+----------|
2026-03-13 01:29:05.394262 | orchestrator | | 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd | test-3 | ACTIVE |
2026-03-13 01:29:05.394268 | orchestrator | | d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf | test-2 | ACTIVE |
2026-03-13 01:29:05.394275 | orchestrator | +--------------------------------------+--------+----------+
2026-03-13 01:29:05.704399 | orchestrator | + server_ping
2026-03-13 01:29:05.705762 | orchestrator | ++ tr -d '\r'
2026-03-13 01:29:05.705793 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-03-13 01:29:08.459042 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-13 01:29:08.459114 | orchestrator | + ping -c3 192.168.112.162
2026-03-13 01:29:08.473146 | orchestrator | PING 192.168.112.162 (192.168.112.162) 56(84) bytes of data.
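[Editor's note] The `compute_list` helper traced above prints the instances hosted on each testbed hypervisor before and after the migration, which is how the job verifies that testbed-node-4 was emptied. A minimal sketch of the same check using the admin OpenStack CLI instead of the osism wrapper (the node names come from the trace; the CLI-based approximation is an assumption):

```shell
# Sketch of the per-hypervisor instance listing seen in the trace,
# approximated with "openstack server list --host" (admin-only flag).
compute_list() {
    for node in testbed-node-3 testbed-node-4 testbed-node-5; do
        echo "+ instances on $node"
        openstack --os-cloud test server list --all-projects \
            --host "$node" -f value -c ID -c Name -c Status
    done
}
```

After a successful evacuation, the source node's listing comes back empty, matching the empty testbed-node-4 table in the log.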
2026-03-13 01:29:08.473216 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=1 ttl=63 time=10.4 ms
2026-03-13 01:29:09.466638 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=2 ttl=63 time=2.16 ms
2026-03-13 01:29:10.466892 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=3 ttl=63 time=1.46 ms
2026-03-13 01:29:10.466969 | orchestrator |
2026-03-13 01:29:10.466977 | orchestrator | --- 192.168.112.162 ping statistics ---
2026-03-13 01:29:10.466983 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-13 01:29:10.466987 | orchestrator | rtt min/avg/max/mdev = 1.458/4.681/10.429/4.074 ms
2026-03-13 01:29:10.468171 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-13 01:29:10.468255 | orchestrator | + ping -c3 192.168.112.112
2026-03-13 01:29:10.479154 | orchestrator | PING 192.168.112.112 (192.168.112.112) 56(84) bytes of data.
2026-03-13 01:29:10.479259 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=1 ttl=63 time=7.14 ms
2026-03-13 01:29:11.475522 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=2 ttl=63 time=1.87 ms
2026-03-13 01:29:12.477077 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=3 ttl=63 time=1.71 ms
2026-03-13 01:29:12.477185 | orchestrator |
2026-03-13 01:29:12.477198 | orchestrator | --- 192.168.112.112 ping statistics ---
2026-03-13 01:29:12.477207 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-13 01:29:12.477214 | orchestrator | rtt min/avg/max/mdev = 1.712/3.573/7.135/2.519 ms
2026-03-13 01:29:12.477221 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-13 01:29:12.477228 | orchestrator | + ping -c3 192.168.112.179
2026-03-13 01:29:12.489276 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data.
2026-03-13 01:29:12.489377 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=7.51 ms
2026-03-13 01:29:13.485675 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.04 ms
2026-03-13 01:29:14.487957 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=2.37 ms
2026-03-13 01:29:14.488041 | orchestrator |
2026-03-13 01:29:14.488051 | orchestrator | --- 192.168.112.179 ping statistics ---
2026-03-13 01:29:14.488060 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-13 01:29:14.488066 | orchestrator | rtt min/avg/max/mdev = 2.042/3.975/7.514/2.506 ms
2026-03-13 01:29:14.488073 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-13 01:29:14.488081 | orchestrator | + ping -c3 192.168.112.133
2026-03-13 01:29:14.499981 | orchestrator | PING 192.168.112.133 (192.168.112.133) 56(84) bytes of data.
2026-03-13 01:29:14.500070 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=1 ttl=63 time=8.76 ms
2026-03-13 01:29:15.494066 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=2 ttl=63 time=2.71 ms
2026-03-13 01:29:16.494924 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=3 ttl=63 time=1.24 ms
2026-03-13 01:29:16.494968 | orchestrator |
2026-03-13 01:29:16.494973 | orchestrator | --- 192.168.112.133 ping statistics ---
2026-03-13 01:29:16.494980 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-03-13 01:29:16.494987 | orchestrator | rtt min/avg/max/mdev = 1.241/4.235/8.759/3.254 ms
2026-03-13 01:29:16.494994 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-13 01:29:16.495000 | orchestrator | + ping -c3 192.168.112.108
2026-03-13 01:29:16.502975 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data.
2026-03-13 01:29:16.503045 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=3.61 ms
2026-03-13 01:29:17.502967 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=1.63 ms
2026-03-13 01:29:18.505816 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=1.85 ms
2026-03-13 01:29:18.505917 | orchestrator |
2026-03-13 01:29:18.505948 | orchestrator | --- 192.168.112.108 ping statistics ---
2026-03-13 01:29:18.505957 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-03-13 01:29:18.505973 | orchestrator | rtt min/avg/max/mdev = 1.627/2.362/3.607/0.884 ms
2026-03-13 01:29:18.505981 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5
2026-03-13 01:29:20.589935 | orchestrator | 2026-03-13 01:29:20 | ERROR  | Unable to get ansible vault password
2026-03-13 01:29:20.590069 | orchestrator | 2026-03-13 01:29:20 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-13 01:29:20.590086 | orchestrator | 2026-03-13 01:29:20 | ERROR  | Dropping encrypted entries
2026-03-13 01:29:21.725963 | orchestrator | 2026-03-13 01:29:21 | INFO  | Live migrating server 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd
2026-03-13 01:29:31.948780 | orchestrator | 2026-03-13 01:29:31 | INFO  | Live migration of 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd (test-3) is still in progress
2026-03-13 01:29:34.291388 | orchestrator | 2026-03-13 01:29:34 | INFO  | Live migration of 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd (test-3) is still in progress
2026-03-13 01:29:36.882111 | orchestrator | 2026-03-13 01:29:36 | INFO  | Live migration of 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd (test-3) is still in progress
2026-03-13 01:29:39.189613 | orchestrator | 2026-03-13 01:29:39 | INFO  | Live migration of 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd (test-3) is still in progress
2026-03-13 01:29:41.392599 | orchestrator | 2026-03-13 01:29:41 | INFO  | Live migration of 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd (test-3) is still in progress
2026-03-13 01:29:43.645902 | orchestrator | 2026-03-13 01:29:43 | INFO  | Live migration of 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd (test-3) is still in progress
2026-03-13 01:29:45.881898 | orchestrator | 2026-03-13 01:29:45 | INFO  | Live migration of 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd (test-3) is still in progress
2026-03-13 01:29:48.130003 | orchestrator | 2026-03-13 01:29:48 | INFO  | Live migration of 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd (test-3) is still in progress
2026-03-13 01:29:50.373501 | orchestrator | 2026-03-13 01:29:50 | INFO  | Live migration of 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd (test-3) is still in progress
2026-03-13 01:29:52.652119 | orchestrator | 2026-03-13 01:29:52 | INFO  | Live migration of 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd (test-3) completed with status ACTIVE
2026-03-13 01:29:52.652204 | orchestrator | 2026-03-13 01:29:52 | INFO  | Live migrating server d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf
2026-03-13 01:30:04.458309 | orchestrator | 2026-03-13 01:30:04 | INFO  | Live migration of d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf (test-2) is still in progress
2026-03-13 01:30:06.755559 | orchestrator | 2026-03-13 01:30:06 | INFO  | Live migration of d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf (test-2) is still in progress
2026-03-13 01:30:09.119187 | orchestrator | 2026-03-13 01:30:09 | INFO  | Live migration of d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf (test-2) is still in progress
2026-03-13 01:30:11.455575 | orchestrator | 2026-03-13 01:30:11 | INFO  | Live migration of d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf (test-2) is still in progress
2026-03-13 01:30:13.693313 | orchestrator | 2026-03-13 01:30:13 | INFO  | Live migration of d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf (test-2) is still in progress
2026-03-13 01:30:15.975107 | orchestrator | 2026-03-13 01:30:15 | INFO  | Live migration of d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf (test-2) is still in progress
2026-03-13 01:30:18.241048 | orchestrator | 2026-03-13 01:30:18 | INFO  | Live migration of d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf (test-2) is still in progress
2026-03-13 01:30:20.566248 | orchestrator | 2026-03-13 01:30:20 | INFO  | Live migration of d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf (test-2) is still in progress
2026-03-13 01:30:22.948161 | orchestrator | 2026-03-13 01:30:22 | INFO  | Live migration of d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf (test-2) completed with status ACTIVE
2026-03-13 01:30:23.299089 | orchestrator | + compute_list
2026-03-13 01:30:23.299213 | orchestrator | + osism manage compute list testbed-node-3
2026-03-13 01:30:25.128089 | orchestrator | 2026-03-13 01:30:25 | ERROR  | Unable to get ansible vault password
2026-03-13 01:30:25.128160 | orchestrator | 2026-03-13 01:30:25 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-13 01:30:25.128168 | orchestrator | 2026-03-13 01:30:25 | ERROR  | Dropping encrypted entries
2026-03-13 01:30:26.736109 | orchestrator | +--------------------------------------+--------+----------+
2026-03-13 01:30:26.736185 | orchestrator | | ID                                   | Name   | Status   |
2026-03-13 01:30:26.736213 | orchestrator | |--------------------------------------+--------+----------|
2026-03-13 01:30:26.736218 | orchestrator | | 140b68c4-9b8a-4f3f-9230-5cbf7c98134a | test-4 | ACTIVE   |
2026-03-13 01:30:26.736222 | orchestrator | | 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd | test-3 | ACTIVE   |
2026-03-13 01:30:26.736226 | orchestrator | | 5db10de1-c950-4c0c-a93f-3bb6e055c017 | test-1 | ACTIVE   |
2026-03-13 01:30:26.736230 | orchestrator | | d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf | test-2 | ACTIVE   |
2026-03-13 01:30:26.736234 | orchestrator | | 27168077-d802-4639-bb7f-7e68b59b2281 | test   | ACTIVE   |
2026-03-13 01:30:26.736238 | orchestrator | +--------------------------------------+--------+----------+
2026-03-13 01:30:27.067738 | orchestrator | + osism manage compute list testbed-node-4
2026-03-13 01:30:29.071046 | orchestrator | 2026-03-13 01:30:29 | ERROR  | Unable to get ansible vault password
2026-03-13 01:30:29.071146 | orchestrator | 2026-03-13 01:30:29 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-13 01:30:29.071160 | orchestrator | 2026-03-13 01:30:29 | ERROR  | Dropping encrypted entries
2026-03-13 01:30:29.856802 | orchestrator | +------+--------+----------+
2026-03-13 01:30:29.856875 | orchestrator | | ID   | Name   | Status   |
2026-03-13 01:30:29.856881 | orchestrator | |------+--------+----------|
2026-03-13 01:30:29.856885 | orchestrator | +------+--------+----------+
2026-03-13 01:30:30.191247 | orchestrator | + osism manage compute list testbed-node-5
2026-03-13 01:30:32.135131 | orchestrator | 2026-03-13 01:30:32 | ERROR  | Unable to get ansible vault password
2026-03-13 01:30:32.135215 | orchestrator | 2026-03-13 01:30:32 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-13 01:30:32.135223 | orchestrator | 2026-03-13 01:30:32 | ERROR  | Dropping encrypted entries
2026-03-13 01:30:32.947550 | orchestrator | +------+--------+----------+
2026-03-13 01:30:32.947633 | orchestrator | | ID   | Name   | Status   |
2026-03-13 01:30:32.947645 | orchestrator | |------+--------+----------|
2026-03-13 01:30:32.947651 | orchestrator | +------+--------+----------+
2026-03-13 01:30:33.304404 | orchestrator | + server_ping
2026-03-13 01:30:33.304753 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-03-13 01:30:33.305147 | orchestrator | ++ tr -d '\r'
2026-03-13 01:30:36.055662 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-13 01:30:36.055789 | orchestrator | + ping -c3 192.168.112.162
2026-03-13 01:30:36.074315 | orchestrator | PING 192.168.112.162 (192.168.112.162) 56(84) bytes of data.
2026-03-13 01:30:36.074378 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=1 ttl=63 time=9.79 ms
2026-03-13 01:30:37.068416 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=2 ttl=63 time=2.17 ms
2026-03-13 01:30:38.069757 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=3 ttl=63 time=1.99 ms
2026-03-13 01:30:38.069863 | orchestrator |
2026-03-13 01:30:38.069874 | orchestrator | --- 192.168.112.162 ping statistics ---
2026-03-13 01:30:38.069913 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-13 01:30:38.069922 | orchestrator | rtt min/avg/max/mdev = 1.986/4.648/9.791/3.637 ms
2026-03-13 01:30:38.069941 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-13 01:30:38.069949 | orchestrator | + ping -c3 192.168.112.112
2026-03-13 01:30:38.079648 | orchestrator | PING 192.168.112.112 (192.168.112.112) 56(84) bytes of data.
2026-03-13 01:30:38.079747 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=1 ttl=63 time=6.32 ms
2026-03-13 01:30:39.077330 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=2 ttl=63 time=2.08 ms
2026-03-13 01:30:40.078075 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=3 ttl=63 time=1.77 ms
2026-03-13 01:30:40.078151 | orchestrator |
2026-03-13 01:30:40.078161 | orchestrator | --- 192.168.112.112 ping statistics ---
2026-03-13 01:30:40.078169 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-13 01:30:40.078176 | orchestrator | rtt min/avg/max/mdev = 1.768/3.387/6.316/2.074 ms
2026-03-13 01:30:40.078939 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-13 01:30:40.079005 | orchestrator | + ping -c3 192.168.112.179
2026-03-13 01:30:40.086351 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data.
2026-03-13 01:30:40.086411 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=4.83 ms
2026-03-13 01:30:41.084432 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=1.90 ms
2026-03-13 01:30:42.087129 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=1.78 ms
2026-03-13 01:30:42.087207 | orchestrator |
2026-03-13 01:30:42.087216 | orchestrator | --- 192.168.112.179 ping statistics ---
2026-03-13 01:30:42.087225 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-03-13 01:30:42.087231 | orchestrator | rtt min/avg/max/mdev = 1.778/2.835/4.832/1.412 ms
2026-03-13 01:30:42.087238 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-13 01:30:42.087246 | orchestrator | + ping -c3 192.168.112.133
2026-03-13 01:30:42.101479 | orchestrator | PING 192.168.112.133 (192.168.112.133) 56(84) bytes of data.
2026-03-13 01:30:42.101545 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=1 ttl=63 time=9.41 ms
2026-03-13 01:30:43.096015 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=2 ttl=63 time=2.23 ms
2026-03-13 01:30:44.097084 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=3 ttl=63 time=1.71 ms
2026-03-13 01:30:44.097163 | orchestrator |
2026-03-13 01:30:44.097172 | orchestrator | --- 192.168.112.133 ping statistics ---
2026-03-13 01:30:44.097180 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-13 01:30:44.097187 | orchestrator | rtt min/avg/max/mdev = 1.714/4.449/9.406/3.510 ms
2026-03-13 01:30:44.097501 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-13 01:30:44.097520 | orchestrator | + ping -c3 192.168.112.108
2026-03-13 01:30:44.107021 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data.
2026-03-13 01:30:44.107080 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=4.68 ms
2026-03-13 01:30:45.105951 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.54 ms
2026-03-13 01:30:46.106102 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=1.64 ms
2026-03-13 01:30:46.106171 | orchestrator |
2026-03-13 01:30:46.106177 | orchestrator | --- 192.168.112.108 ping statistics ---
2026-03-13 01:30:46.106183 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-03-13 01:30:46.106189 | orchestrator | rtt min/avg/max/mdev = 1.639/2.952/4.679/1.274 ms
2026-03-13 01:30:46.107526 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3
2026-03-13 01:30:48.256615 | orchestrator | 2026-03-13 01:30:48 | ERROR  | Unable to get ansible vault password
2026-03-13 01:30:48.256743 | orchestrator | 2026-03-13 01:30:48 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-13 01:30:48.256757 | orchestrator | 2026-03-13 01:30:48 | ERROR  | Dropping encrypted entries
2026-03-13 01:30:49.609402 | orchestrator | 2026-03-13 01:30:49 | INFO  | Live migrating server 140b68c4-9b8a-4f3f-9230-5cbf7c98134a
2026-03-13 01:31:00.575006 | orchestrator | 2026-03-13 01:31:00 | INFO  | Live migration of 140b68c4-9b8a-4f3f-9230-5cbf7c98134a (test-4) is still in progress
2026-03-13 01:31:02.895449 | orchestrator | 2026-03-13 01:31:02 | INFO  | Live migration of 140b68c4-9b8a-4f3f-9230-5cbf7c98134a (test-4) is still in progress
2026-03-13 01:31:05.270413 | orchestrator | 2026-03-13 01:31:05 | INFO  | Live migration of 140b68c4-9b8a-4f3f-9230-5cbf7c98134a (test-4) is still in progress
2026-03-13 01:31:07.715226 | orchestrator | 2026-03-13 01:31:07 | INFO  | Live migration of 140b68c4-9b8a-4f3f-9230-5cbf7c98134a (test-4) is still in progress
2026-03-13 01:31:10.145402 | orchestrator | 2026-03-13 01:31:10 | INFO  | Live migration of 140b68c4-9b8a-4f3f-9230-5cbf7c98134a (test-4) is still in progress
2026-03-13 01:31:12.356616 | orchestrator | 2026-03-13 01:31:12 | INFO  | Live migration of 140b68c4-9b8a-4f3f-9230-5cbf7c98134a (test-4) is still in progress
2026-03-13 01:31:14.635207 | orchestrator | 2026-03-13 01:31:14 | INFO  | Live migration of 140b68c4-9b8a-4f3f-9230-5cbf7c98134a (test-4) is still in progress
2026-03-13 01:31:16.880247 | orchestrator | 2026-03-13 01:31:16 | INFO  | Live migration of 140b68c4-9b8a-4f3f-9230-5cbf7c98134a (test-4) is still in progress
2026-03-13 01:31:19.249913 | orchestrator | 2026-03-13 01:31:19 | INFO  | Live migration of 140b68c4-9b8a-4f3f-9230-5cbf7c98134a (test-4) completed with status ACTIVE
2026-03-13 01:31:19.249980 | orchestrator | 2026-03-13 01:31:19 | INFO  | Live migrating server 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd
2026-03-13 01:31:29.528369 | orchestrator | 2026-03-13 01:31:29 | INFO  | Live migration of 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd (test-3) is still in progress
2026-03-13 01:31:31.806605 | orchestrator | 2026-03-13 01:31:31 | INFO  | Live migration of 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd (test-3) is still in progress
2026-03-13 01:31:34.123008 | orchestrator | 2026-03-13 01:31:34 | INFO  | Live migration of 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd (test-3) is still in progress
2026-03-13 01:31:36.450389 | orchestrator | 2026-03-13 01:31:36 | INFO  | Live migration of 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd (test-3) is still in progress
2026-03-13 01:31:38.697234 | orchestrator | 2026-03-13 01:31:38 | INFO  | Live migration of 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd (test-3) is still in progress
2026-03-13 01:31:40.973102 | orchestrator | 2026-03-13 01:31:40 | INFO  | Live migration of 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd (test-3) is still in progress
2026-03-13 01:31:43.306828 | orchestrator | 2026-03-13 01:31:43 | INFO  | Live migration of 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd (test-3) is still in progress
2026-03-13 01:31:45.623598 | orchestrator | 2026-03-13 01:31:45 | INFO  | Live migration of 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd (test-3) is still in progress
2026-03-13 01:31:47.909590 | orchestrator | 2026-03-13 01:31:47 | INFO  | Live migration of 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd (test-3) completed with status ACTIVE
2026-03-13 01:31:47.909700 | orchestrator | 2026-03-13 01:31:47 | INFO  | Live migrating server 5db10de1-c950-4c0c-a93f-3bb6e055c017
2026-03-13 01:31:58.571114 | orchestrator | 2026-03-13 01:31:58 | INFO  | Live migration of 5db10de1-c950-4c0c-a93f-3bb6e055c017 (test-1) is still in progress
2026-03-13 01:32:00.889943 | orchestrator | 2026-03-13 01:32:00 | INFO  | Live migration of 5db10de1-c950-4c0c-a93f-3bb6e055c017 (test-1) is still in progress
2026-03-13 01:32:03.149698 | orchestrator | 2026-03-13 01:32:03 | INFO  | Live migration of 5db10de1-c950-4c0c-a93f-3bb6e055c017 (test-1) is still in progress
2026-03-13 01:32:05.351094 | orchestrator | 2026-03-13 01:32:05 | INFO  | Live migration of 5db10de1-c950-4c0c-a93f-3bb6e055c017 (test-1) is still in progress
2026-03-13 01:32:07.681553 | orchestrator | 2026-03-13 01:32:07 | INFO  | Live migration of 5db10de1-c950-4c0c-a93f-3bb6e055c017 (test-1) is still in progress
2026-03-13 01:32:09.950536 | orchestrator | 2026-03-13 01:32:09 | INFO  | Live migration of 5db10de1-c950-4c0c-a93f-3bb6e055c017 (test-1) is still in progress
2026-03-13 01:32:12.214972 | orchestrator | 2026-03-13 01:32:12 | INFO  | Live migration of 5db10de1-c950-4c0c-a93f-3bb6e055c017 (test-1) is still in progress
2026-03-13 01:32:14.465540 | orchestrator | 2026-03-13 01:32:14 | INFO  | Live migration of 5db10de1-c950-4c0c-a93f-3bb6e055c017 (test-1) is still in progress
2026-03-13 01:32:16.757496 | orchestrator | 2026-03-13 01:32:16 | INFO  | Live migration of 5db10de1-c950-4c0c-a93f-3bb6e055c017 (test-1) completed with status ACTIVE
2026-03-13 01:32:16.757584 | orchestrator | 2026-03-13 01:32:16 | INFO  | Live migrating server d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf
2026-03-13 01:32:27.793001 | orchestrator | 2026-03-13 01:32:27 | INFO  | Live migration of d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf (test-2) is still in progress
2026-03-13 01:32:30.130114 | orchestrator | 2026-03-13 01:32:30 | INFO  | Live migration of d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf (test-2) is still in progress
2026-03-13 01:32:32.451920 | orchestrator | 2026-03-13 01:32:32 | INFO  | Live migration of d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf (test-2) is still in progress
2026-03-13 01:32:34.760361 | orchestrator | 2026-03-13 01:32:34 | INFO  | Live migration of d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf (test-2) is still in progress
2026-03-13 01:32:37.011282 | orchestrator | 2026-03-13 01:32:37 | INFO  | Live migration of d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf (test-2) is still in progress
2026-03-13 01:32:39.295498 | orchestrator | 2026-03-13 01:32:39 | INFO  | Live migration of d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf (test-2) is still in progress
2026-03-13 01:32:41.579130 | orchestrator | 2026-03-13 01:32:41 | INFO  | Live migration of d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf (test-2) is still in progress
2026-03-13 01:32:43.798468 | orchestrator | 2026-03-13 01:32:43 | INFO  | Live migration of d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf (test-2) is still in progress
2026-03-13 01:32:46.166246 | orchestrator | 2026-03-13 01:32:46 | INFO  | Live migration of d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf (test-2) completed with status ACTIVE
2026-03-13 01:32:46.166328 | orchestrator | 2026-03-13 01:32:46 | INFO  | Live migrating server 27168077-d802-4639-bb7f-7e68b59b2281
2026-03-13 01:32:55.362888 | orchestrator | 2026-03-13 01:32:55 | INFO  | Live migration of 27168077-d802-4639-bb7f-7e68b59b2281 (test) is still in progress
2026-03-13 01:32:57.695170 | orchestrator | 2026-03-13 01:32:57 | INFO  | Live migration of 27168077-d802-4639-bb7f-7e68b59b2281 (test) is still in progress
2026-03-13 01:33:00.054251 | orchestrator | 2026-03-13 01:33:00 | INFO  | Live migration of 27168077-d802-4639-bb7f-7e68b59b2281 (test) is still in progress
2026-03-13 01:33:02.331044 | orchestrator | 2026-03-13 01:33:02 | INFO  | Live migration of 27168077-d802-4639-bb7f-7e68b59b2281 (test) is still in progress
2026-03-13 01:33:04.609259 | orchestrator | 2026-03-13 01:33:04 | INFO  | Live migration of 27168077-d802-4639-bb7f-7e68b59b2281 (test) is still in progress
2026-03-13 01:33:06.942169 | orchestrator | 2026-03-13 01:33:06 | INFO  | Live migration of 27168077-d802-4639-bb7f-7e68b59b2281 (test) is still in progress
2026-03-13 01:33:09.271487 | orchestrator | 2026-03-13 01:33:09 | INFO  | Live migration of 27168077-d802-4639-bb7f-7e68b59b2281 (test) is still in progress
2026-03-13 01:33:11.561023 | orchestrator | 2026-03-13 01:33:11 | INFO  | Live migration of 27168077-d802-4639-bb7f-7e68b59b2281 (test) is still in progress
2026-03-13 01:33:13.834663 | orchestrator | 2026-03-13 01:33:13 | INFO  | Live migration of 27168077-d802-4639-bb7f-7e68b59b2281 (test) is still in progress
2026-03-13 01:33:16.199648 | orchestrator | 2026-03-13 01:33:16 | INFO  | Live migration of 27168077-d802-4639-bb7f-7e68b59b2281 (test) is still in progress
2026-03-13 01:33:18.532089 | orchestrator | 2026-03-13 01:33:18 | INFO  | Live migration of 27168077-d802-4639-bb7f-7e68b59b2281 (test) completed with status ACTIVE
2026-03-13 01:33:18.846985 | orchestrator | + compute_list
2026-03-13 01:33:18.847066 | orchestrator | + osism manage compute list testbed-node-3
2026-03-13 01:33:20.950462 | orchestrator | 2026-03-13 01:33:20 | ERROR  | Unable to get ansible vault password
2026-03-13 01:33:20.950542 | orchestrator | 2026-03-13 01:33:20 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-13 01:33:20.950582 | orchestrator | 2026-03-13 01:33:20 | ERROR  | Dropping encrypted entries
2026-03-13 01:33:21.696490 | orchestrator | +------+--------+----------+
2026-03-13 01:33:21.696534 | orchestrator | | ID   | Name   | Status   |
2026-03-13 01:33:21.696540 | orchestrator | |------+--------+----------|
2026-03-13 01:33:21.696544 | orchestrator | +------+--------+----------+
2026-03-13 01:33:22.046376 | orchestrator | + osism manage compute list testbed-node-4
2026-03-13 01:33:23.889367 | orchestrator | 2026-03-13 01:33:23 | ERROR  | Unable to get ansible vault password
2026-03-13 01:33:23.889440 | orchestrator | 2026-03-13 01:33:23 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-13 01:33:23.889450 | orchestrator | 2026-03-13 01:33:23 | ERROR  | Dropping encrypted entries
2026-03-13 01:33:25.134921 | orchestrator | +--------------------------------------+--------+----------+
2026-03-13 01:33:25.135004 | orchestrator | | ID                                   | Name   | Status   |
2026-03-13 01:33:25.135015 | orchestrator | |--------------------------------------+--------+----------|
2026-03-13 01:33:25.135040 | orchestrator | | 140b68c4-9b8a-4f3f-9230-5cbf7c98134a | test-4 | ACTIVE   |
2026-03-13 01:33:25.135046 | orchestrator | | 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd | test-3 | ACTIVE   |
2026-03-13 01:33:25.135050 | orchestrator | | 5db10de1-c950-4c0c-a93f-3bb6e055c017 | test-1 | ACTIVE   |
2026-03-13 01:33:25.135054 | orchestrator | | d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf | test-2 | ACTIVE   |
2026-03-13 01:33:25.135058 | orchestrator | | 27168077-d802-4639-bb7f-7e68b59b2281 | test   | ACTIVE   |
2026-03-13 01:33:25.135062 | orchestrator | +--------------------------------------+--------+----------+
2026-03-13 01:33:25.375301 | orchestrator | + osism manage compute list testbed-node-5
2026-03-13 01:33:27.157349 | orchestrator | 2026-03-13 01:33:27 | ERROR  | Unable to get ansible vault password
2026-03-13 01:33:27.157417 | orchestrator | 2026-03-13 01:33:27 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-13 01:33:27.157425 | orchestrator | 2026-03-13 01:33:27 | ERROR  | Dropping encrypted entries
2026-03-13 01:33:28.053838 | orchestrator | +------+--------+----------+
2026-03-13 01:33:28.053916 | orchestrator | | ID   | Name   | Status   |
2026-03-13 01:33:28.053923 | orchestrator | |------+--------+----------|
2026-03-13 01:33:28.053928 | orchestrator | +------+--------+----------+
2026-03-13 01:33:28.298397 | orchestrator | + server_ping
2026-03-13 01:33:28.299764 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-03-13 01:33:28.299822 | orchestrator | ++ tr -d '\r'
2026-03-13 01:33:30.617032 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-13 01:33:30.617112 | orchestrator | + ping -c3 192.168.112.162
2026-03-13 01:33:30.626787 | orchestrator | PING 192.168.112.162 (192.168.112.162) 56(84) bytes of data.
2026-03-13 01:33:30.626853 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=1 ttl=63 time=8.40 ms
2026-03-13 01:33:31.621185 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=2 ttl=63 time=2.07 ms
2026-03-13 01:33:32.622583 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=3 ttl=63 time=1.83 ms
2026-03-13 01:33:32.622669 | orchestrator |
2026-03-13 01:33:32.622676 | orchestrator | --- 192.168.112.162 ping statistics ---
2026-03-13 01:33:32.622682 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-03-13 01:33:32.622687 | orchestrator | rtt min/avg/max/mdev = 1.830/4.098/8.397/3.041 ms
2026-03-13 01:33:32.623115 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-13 01:33:32.623137 | orchestrator | + ping -c3 192.168.112.112
2026-03-13 01:33:32.635685 | orchestrator | PING 192.168.112.112 (192.168.112.112) 56(84) bytes of data.
2026-03-13 01:33:32.635755 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=1 ttl=63 time=8.88 ms
2026-03-13 01:33:33.630410 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=2 ttl=63 time=2.49 ms
2026-03-13 01:33:34.631117 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=3 ttl=63 time=1.47 ms
2026-03-13 01:33:34.631234 | orchestrator |
2026-03-13 01:33:34.631588 | orchestrator | --- 192.168.112.112 ping statistics ---
2026-03-13 01:33:34.631622 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-13 01:33:34.631629 | orchestrator | rtt min/avg/max/mdev = 1.466/4.279/8.882/3.281 ms
2026-03-13 01:33:34.631637 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-13 01:33:34.631644 | orchestrator | + ping -c3 192.168.112.179
2026-03-13 01:33:34.644570 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data.
2026-03-13 01:33:34.644686 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=7.98 ms
2026-03-13 01:33:35.639864 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.05 ms
2026-03-13 01:33:36.641555 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=1.89 ms
2026-03-13 01:33:36.641645 | orchestrator |
2026-03-13 01:33:36.641653 | orchestrator | --- 192.168.112.179 ping statistics ---
2026-03-13 01:33:36.641660 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-03-13 01:33:36.641665 | orchestrator | rtt min/avg/max/mdev = 1.889/3.970/7.978/2.834 ms
2026-03-13 01:33:36.641670 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-13 01:33:36.641674 | orchestrator | + ping -c3 192.168.112.133
2026-03-13 01:33:36.650814 | orchestrator | PING 192.168.112.133 (192.168.112.133) 56(84) bytes of data.
2026-03-13 01:33:36.650877 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=1 ttl=63 time=4.94 ms
2026-03-13 01:33:37.650411 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=2 ttl=63 time=2.41 ms
2026-03-13 01:33:38.651861 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=3 ttl=63 time=1.74 ms
2026-03-13 01:33:38.651928 | orchestrator |
2026-03-13 01:33:38.651936 | orchestrator | --- 192.168.112.133 ping statistics ---
2026-03-13 01:33:38.651941 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-03-13 01:33:38.651945 | orchestrator | rtt min/avg/max/mdev = 1.743/3.031/4.938/1.375 ms
2026-03-13 01:33:38.652156 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-13 01:33:38.652166 | orchestrator | + ping -c3 192.168.112.108
2026-03-13 01:33:38.661568 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data.
2026-03-13 01:33:38.661666 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=5.49 ms
2026-03-13 01:33:39.660330 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.41 ms
2026-03-13 01:33:40.661138 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=1.43 ms
2026-03-13 01:33:40.661558 | orchestrator |
2026-03-13 01:33:40.661571 | orchestrator | --- 192.168.112.108 ping statistics ---
2026-03-13 01:33:40.661578 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-03-13 01:33:40.661582 | orchestrator | rtt min/avg/max/mdev = 1.427/3.109/5.491/1.731 ms
2026-03-13 01:33:40.662267 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4
2026-03-13 01:33:42.472654 | orchestrator | 2026-03-13 01:33:42 | ERROR  | Unable to get ansible vault password
2026-03-13 01:33:42.472773 | orchestrator | 2026-03-13 01:33:42 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-13 01:33:42.472799 | orchestrator | 2026-03-13 01:33:42 | ERROR  | Dropping encrypted entries
2026-03-13 01:33:43.747082 | orchestrator | 2026-03-13 01:33:43 | INFO  | Live migrating server 140b68c4-9b8a-4f3f-9230-5cbf7c98134a
2026-03-13 01:33:53.691171 | orchestrator | 2026-03-13 01:33:53 | INFO  | Live migration of 140b68c4-9b8a-4f3f-9230-5cbf7c98134a (test-4) is still in progress
2026-03-13 01:33:56.028366 | orchestrator | 2026-03-13 01:33:56 | INFO  | Live migration of 140b68c4-9b8a-4f3f-9230-5cbf7c98134a (test-4) is still in progress
2026-03-13 01:33:58.380021 | orchestrator | 2026-03-13 01:33:58 | INFO  | Live migration of 140b68c4-9b8a-4f3f-9230-5cbf7c98134a (test-4) is still in progress
2026-03-13 01:34:00.588991 | orchestrator | 2026-03-13 01:34:00 | INFO  | Live migration of 140b68c4-9b8a-4f3f-9230-5cbf7c98134a (test-4) is still in progress
2026-03-13 01:34:02.850254 | orchestrator | 2026-03-13 01:34:02 | INFO  | Live migration of 140b68c4-9b8a-4f3f-9230-5cbf7c98134a (test-4) is still in progress
2026-03-13 01:34:05.164403 | orchestrator | 2026-03-13 01:34:05 | INFO  | Live migration of 140b68c4-9b8a-4f3f-9230-5cbf7c98134a (test-4) is still in progress
2026-03-13 01:34:07.480959 | orchestrator | 2026-03-13 01:34:07 | INFO  | Live migration of 140b68c4-9b8a-4f3f-9230-5cbf7c98134a (test-4) is still in progress
2026-03-13 01:34:09.726441 | orchestrator | 2026-03-13 01:34:09 | INFO  | Live migration of 140b68c4-9b8a-4f3f-9230-5cbf7c98134a (test-4) is still in progress
2026-03-13 01:34:12.007205 | orchestrator | 2026-03-13 01:34:12 | INFO  | Live migration of 140b68c4-9b8a-4f3f-9230-5cbf7c98134a (test-4) completed with status ACTIVE
2026-03-13 01:34:12.007294 | orchestrator | 2026-03-13 01:34:12 | INFO  | Live migrating server 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd
2026-03-13 01:34:21.390900 | orchestrator | 2026-03-13 01:34:21 | INFO  | Live migration of 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd (test-3) is still in progress
2026-03-13 01:34:23.730267 | orchestrator | 2026-03-13 01:34:23 | INFO  | Live migration of 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd (test-3) is still in progress
2026-03-13 01:34:26.021411 | orchestrator | 2026-03-13 01:34:26 | INFO  | Live migration of 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd (test-3) is still in progress
2026-03-13 01:34:28.285399 | orchestrator | 2026-03-13 01:34:28 | INFO  | Live migration of 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd (test-3) is still in progress
2026-03-13 01:34:30.595816 | orchestrator | 2026-03-13 01:34:30 | INFO  | Live migration of 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd (test-3) is still in progress
2026-03-13 01:34:32.799479 | orchestrator | 2026-03-13 01:34:32 | INFO  | Live migration of 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd (test-3) is still in progress
2026-03-13 01:34:35.086717 | orchestrator | 2026-03-13 01:34:35 | INFO  | Live migration of 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd (test-3) is still in progress
2026-03-13 01:34:37.357002 | orchestrator | 2026-03-13 01:34:37 | INFO  | Live migration of 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd (test-3) is still in progress
2026-03-13 01:34:39.611875 | orchestrator | 2026-03-13 01:34:39 | INFO  | Live migration of 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd (test-3) completed with status ACTIVE
2026-03-13 01:34:39.611951 | orchestrator | 2026-03-13 01:34:39 | INFO  | Live migrating server 5db10de1-c950-4c0c-a93f-3bb6e055c017
2026-03-13 01:34:49.288852 | orchestrator | 2026-03-13 01:34:49 | INFO  | Live migration of 5db10de1-c950-4c0c-a93f-3bb6e055c017 (test-1) is still in progress
2026-03-13 01:34:51.538728 | orchestrator | 2026-03-13 01:34:51 | INFO  | Live migration of 5db10de1-c950-4c0c-a93f-3bb6e055c017 (test-1) is still in progress
2026-03-13 01:34:53.872096 | orchestrator | 2026-03-13 01:34:53 | INFO  | Live migration of 5db10de1-c950-4c0c-a93f-3bb6e055c017 (test-1) is still in progress
2026-03-13 01:34:56.159424 | orchestrator | 2026-03-13 01:34:56 | INFO  | Live migration of 5db10de1-c950-4c0c-a93f-3bb6e055c017 (test-1) is still in progress
2026-03-13 01:34:58.474411 | orchestrator | 2026-03-13 01:34:58 | INFO  | Live migration of 5db10de1-c950-4c0c-a93f-3bb6e055c017 (test-1) is still in progress
2026-03-13 01:35:00.767260 | orchestrator | 2026-03-13 01:35:00 | INFO  | Live migration of 5db10de1-c950-4c0c-a93f-3bb6e055c017 (test-1) is still in progress
2026-03-13 01:35:03.098498 | orchestrator | 2026-03-13 01:35:03 | INFO  | Live migration of 5db10de1-c950-4c0c-a93f-3bb6e055c017 (test-1) is still in progress
2026-03-13 01:35:05.376660 | orchestrator | 2026-03-13 01:35:05 | INFO  | Live migration of 5db10de1-c950-4c0c-a93f-3bb6e055c017 (test-1) is still in progress
2026-03-13 01:35:07.745152 | orchestrator | 2026-03-13 01:35:07 | INFO  | Live migration of 5db10de1-c950-4c0c-a93f-3bb6e055c017 (test-1) completed with status ACTIVE
2026-03-13 01:35:07.745215 | orchestrator | 2026-03-13 01:35:07 | INFO  | Live migrating server d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf
2026-03-13 01:35:17.883785 | orchestrator | 2026-03-13 01:35:17 | INFO  | Live migration of d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf (test-2) is still in progress
2026-03-13 01:35:20.234131 | orchestrator | 2026-03-13 01:35:20 | INFO  | Live migration of d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf (test-2) is still in progress
2026-03-13 01:35:22.542919 | orchestrator | 2026-03-13 01:35:22 | INFO  | Live migration of d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf (test-2) is still in progress
2026-03-13 01:35:24.846233 | orchestrator | 2026-03-13 01:35:24 | INFO  | Live migration of d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf (test-2) is still in progress
2026-03-13 01:35:27.114003 | orchestrator | 2026-03-13 01:35:27 | INFO  | Live migration of d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf (test-2) is still in progress
2026-03-13 01:35:29.342427 | orchestrator | 2026-03-13 01:35:29 | INFO  | Live migration of d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf (test-2) is still in progress
2026-03-13 01:35:31.624390 | orchestrator | 2026-03-13 01:35:31 | INFO  | Live migration of d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf (test-2) is still in progress
2026-03-13 01:35:33.990264 | orchestrator | 2026-03-13 01:35:33 | INFO  | Live migration of d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf (test-2) is still in progress
2026-03-13 01:35:36.401135 | orchestrator | 2026-03-13 01:35:36 | INFO  | Live migration of d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf (test-2) completed with status ACTIVE
2026-03-13 01:35:36.401222 | orchestrator | 2026-03-13 01:35:36 | INFO  | Live migrating server 27168077-d802-4639-bb7f-7e68b59b2281
2026-03-13 01:35:46.017870 | orchestrator | 2026-03-13 01:35:46 | INFO  | Live migration of 27168077-d802-4639-bb7f-7e68b59b2281 (test) is still in progress
2026-03-13 01:35:48.263546 | orchestrator | 2026-03-13 01:35:48 | INFO  | Live migration of 27168077-d802-4639-bb7f-7e68b59b2281 (test) is still in progress
2026-03-13 01:35:50.541987 | orchestrator | 2026-03-13 01:35:50 | INFO  | Live migration of 27168077-d802-4639-bb7f-7e68b59b2281 (test) is still in progress
2026-03-13 01:35:52.896961 | orchestrator | 2026-03-13 01:35:52 | INFO  | Live migration of 27168077-d802-4639-bb7f-7e68b59b2281 (test) is still in progress
2026-03-13 01:35:55.218239 | orchestrator | 2026-03-13 01:35:55 | INFO  | Live migration of 27168077-d802-4639-bb7f-7e68b59b2281 (test) is still in progress
2026-03-13 01:35:57.583879 | orchestrator | 2026-03-13 01:35:57 | INFO  | Live migration of 27168077-d802-4639-bb7f-7e68b59b2281 (test) is still in progress
2026-03-13 01:35:59.848150 | orchestrator | 2026-03-13 01:35:59 | INFO  | Live migration of 27168077-d802-4639-bb7f-7e68b59b2281 (test) is still in progress
2026-03-13 01:36:02.046126 | orchestrator | 2026-03-13 01:36:02 | INFO  | Live migration of 27168077-d802-4639-bb7f-7e68b59b2281 (test) is still in progress
2026-03-13 01:36:04.327926 | orchestrator | 2026-03-13 01:36:04 | INFO  | Live migration of 27168077-d802-4639-bb7f-7e68b59b2281 (test) is still in progress
2026-03-13 01:36:06.804158 | orchestrator | 2026-03-13 01:36:06 | INFO  | Live migration of 27168077-d802-4639-bb7f-7e68b59b2281 (test) completed with status ACTIVE
2026-03-13 01:36:07.093486 | orchestrator | + compute_list
2026-03-13 01:36:07.093643 | orchestrator | + osism manage compute list testbed-node-3
2026-03-13 01:36:09.083417 | orchestrator | 2026-03-13 01:36:09 | ERROR  | Unable to get ansible vault password
2026-03-13 01:36:09.083502 | orchestrator | 2026-03-13 01:36:09 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-13 01:36:09.083514 | orchestrator | 2026-03-13 01:36:09 | ERROR  | Dropping encrypted entries
2026-03-13 01:36:09.830963 | orchestrator | +------+--------+----------+
2026-03-13 01:36:09.831006 | orchestrator | | ID | Name | Status |
2026-03-13 01:36:09.831012 | orchestrator | |------+--------+----------|
2026-03-13 01:36:09.831016 | orchestrator | +------+--------+----------+
2026-03-13 01:36:10.159988 | orchestrator | + osism manage compute list testbed-node-4
2026-03-13 01:36:12.178589 | orchestrator | 2026-03-13 01:36:12 | ERROR  | Unable to get ansible vault password
2026-03-13 01:36:12.178659 | orchestrator | 2026-03-13 01:36:12 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-13 01:36:12.178682 | orchestrator | 2026-03-13 01:36:12 | ERROR  | Dropping encrypted entries
2026-03-13 01:36:12.973231 | orchestrator | +------+--------+----------+
2026-03-13 01:36:12.973297 | orchestrator | | ID | Name | Status |
2026-03-13 01:36:12.973303 | orchestrator | |------+--------+----------|
2026-03-13 01:36:12.973307 | orchestrator | +------+--------+----------+
2026-03-13 01:36:13.257894 | orchestrator | + osism manage compute list testbed-node-5
2026-03-13 01:36:15.276019 | orchestrator | 2026-03-13 01:36:15 | ERROR  | Unable to get ansible vault password
2026-03-13 01:36:15.276100 | orchestrator | 2026-03-13 01:36:15 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-13 01:36:15.276130 | orchestrator | 2026-03-13 01:36:15 | ERROR  | Dropping encrypted entries
2026-03-13 01:36:16.670928 | orchestrator | +--------------------------------------+--------+----------+
2026-03-13 01:36:16.671002 | orchestrator | | ID | Name | Status |
2026-03-13 01:36:16.671008 | orchestrator | |--------------------------------------+--------+----------|
2026-03-13 01:36:16.671013 | orchestrator | | 140b68c4-9b8a-4f3f-9230-5cbf7c98134a | test-4 | ACTIVE |
2026-03-13 01:36:16.671017 | orchestrator | | 1bf3b533-6c02-4133-8c7d-8c36e59e3fdd | test-3 | ACTIVE |
2026-03-13 01:36:16.671021 | orchestrator | | 5db10de1-c950-4c0c-a93f-3bb6e055c017 | test-1 | ACTIVE |
2026-03-13 01:36:16.671025 | orchestrator | | d2ae0d1f-6a2d-46e5-97f4-b12d225e89bf | test-2 | ACTIVE |
2026-03-13 01:36:16.671029 | orchestrator | | 27168077-d802-4639-bb7f-7e68b59b2281 | test | ACTIVE |
2026-03-13 01:36:16.671033 | orchestrator | +--------------------------------------+--------+----------+
2026-03-13 01:36:16.985860 | orchestrator | + server_ping
2026-03-13 01:36:16.986275 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-03-13 01:36:16.986419 | orchestrator | ++ tr -d '\r'
2026-03-13 01:36:19.950956 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-13 01:36:19.951034 | orchestrator | + ping -c3 192.168.112.162
2026-03-13 01:36:19.962397 | orchestrator | PING 192.168.112.162 (192.168.112.162) 56(84) bytes of data.
2026-03-13 01:36:19.962473 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=1 ttl=63 time=7.54 ms
2026-03-13 01:36:20.959153 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=2 ttl=63 time=2.33 ms
2026-03-13 01:36:21.960839 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=3 ttl=63 time=1.87 ms
2026-03-13 01:36:21.960945 | orchestrator |
2026-03-13 01:36:21.960956 | orchestrator | --- 192.168.112.162 ping statistics ---
2026-03-13 01:36:21.960965 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-13 01:36:21.960973 | orchestrator | rtt min/avg/max/mdev = 1.865/3.911/7.536/2.569 ms
2026-03-13 01:36:21.961588 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-13 01:36:21.961667 | orchestrator | + ping -c3 192.168.112.112
2026-03-13 01:36:21.971266 | orchestrator | PING 192.168.112.112 (192.168.112.112) 56(84) bytes of data.
2026-03-13 01:36:21.971362 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=1 ttl=63 time=5.61 ms
2026-03-13 01:36:22.969625 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=2 ttl=63 time=1.56 ms
2026-03-13 01:36:23.970687 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=3 ttl=63 time=1.21 ms
2026-03-13 01:36:23.970742 | orchestrator |
2026-03-13 01:36:23.970748 | orchestrator | --- 192.168.112.112 ping statistics ---
2026-03-13 01:36:23.970753 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-03-13 01:36:23.970757 | orchestrator | rtt min/avg/max/mdev = 1.208/2.793/5.609/1.996 ms
2026-03-13 01:36:23.971819 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-13 01:36:23.971855 | orchestrator | + ping -c3 192.168.112.179
2026-03-13 01:36:23.978906 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data.
2026-03-13 01:36:23.978961 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=4.89 ms
2026-03-13 01:36:24.979168 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.13 ms
2026-03-13 01:36:25.980146 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=1.77 ms
2026-03-13 01:36:25.980217 | orchestrator |
2026-03-13 01:36:25.980224 | orchestrator | --- 192.168.112.179 ping statistics ---
2026-03-13 01:36:25.980229 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-13 01:36:25.980234 | orchestrator | rtt min/avg/max/mdev = 1.774/2.931/4.893/1.394 ms
2026-03-13 01:36:25.980854 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-13 01:36:25.980890 | orchestrator | + ping -c3 192.168.112.133
2026-03-13 01:36:25.990095 | orchestrator | PING 192.168.112.133 (192.168.112.133) 56(84) bytes of data.
2026-03-13 01:36:25.990171 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=1 ttl=63 time=4.94 ms 2026-03-13 01:36:26.988737 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=2 ttl=63 time=2.33 ms 2026-03-13 01:36:27.988921 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=3 ttl=63 time=1.17 ms 2026-03-13 01:36:27.988970 | orchestrator | 2026-03-13 01:36:27.988976 | orchestrator | --- 192.168.112.133 ping statistics --- 2026-03-13 01:36:27.988981 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-13 01:36:27.988985 | orchestrator | rtt min/avg/max/mdev = 1.171/2.814/4.941/1.576 ms 2026-03-13 01:36:27.989733 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-13 01:36:27.989755 | orchestrator | + ping -c3 192.168.112.108 2026-03-13 01:36:27.997790 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data. 2026-03-13 01:36:27.997852 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=3.99 ms 2026-03-13 01:36:28.997174 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.10 ms 2026-03-13 01:36:29.998292 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=1.65 ms 2026-03-13 01:36:29.998344 | orchestrator | 2026-03-13 01:36:29.998351 | orchestrator | --- 192.168.112.108 ping statistics --- 2026-03-13 01:36:29.998358 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-13 01:36:29.998363 | orchestrator | rtt min/avg/max/mdev = 1.647/2.578/3.989/1.014 ms 2026-03-13 01:36:30.409285 | orchestrator | ok: Runtime: 0:17:37.431428 2026-03-13 01:36:30.471882 | 2026-03-13 01:36:30.472073 | TASK [Run tempest] 2026-03-13 01:36:31.161751 | orchestrator | + set -e 2026-03-13 01:36:31.161898 | orchestrator | + source /opt/manager-vars.sh 2026-03-13 01:36:31.162242 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-13 
01:36:31.162279 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-13 01:36:31.162288 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-13 01:36:31.162296 | orchestrator | ++ CEPH_VERSION=reef 2026-03-13 01:36:31.162304 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-13 01:36:31.162330 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-13 01:36:31.162349 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-13 01:36:31.162361 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-13 01:36:31.162368 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-13 01:36:31.162379 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-13 01:36:31.162386 | orchestrator | ++ export ARA=false 2026-03-13 01:36:31.162393 | orchestrator | ++ ARA=false 2026-03-13 01:36:31.162402 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-13 01:36:31.162408 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-13 01:36:31.162414 | orchestrator | ++ export TEMPEST=true 2026-03-13 01:36:31.162423 | orchestrator | ++ TEMPEST=true 2026-03-13 01:36:31.162431 | orchestrator | ++ export IS_ZUUL=true 2026-03-13 01:36:31.162437 | orchestrator | ++ IS_ZUUL=true 2026-03-13 01:36:31.162444 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.64 2026-03-13 01:36:31.162451 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.64 2026-03-13 01:36:31.162457 | orchestrator | ++ export EXTERNAL_API=false 2026-03-13 01:36:31.162463 | orchestrator | ++ EXTERNAL_API=false 2026-03-13 01:36:31.162470 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-13 01:36:31.162475 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-13 01:36:31.162484 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-13 01:36:31.162492 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-13 01:36:31.162496 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-13 01:36:31.162500 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-13 01:36:31.162519 | orchestrator | + echo 2026-03-13 01:36:31.162524 | 
orchestrator | 2026-03-13 01:36:31.162528 | orchestrator | # Tempest 2026-03-13 01:36:31.162532 | orchestrator | 2026-03-13 01:36:31.162536 | orchestrator | + echo '# Tempest' 2026-03-13 01:36:31.162539 | orchestrator | + echo 2026-03-13 01:36:31.162543 | orchestrator | + [[ ! -e /opt/tempest ]] 2026-03-13 01:36:31.162547 | orchestrator | + osism apply tempest --skip-tags run-tempest 2026-03-13 01:36:43.258365 | orchestrator | 2026-03-13 01:36:43 | INFO  | Prepare task for execution of tempest. 2026-03-13 01:36:43.340810 | orchestrator | 2026-03-13 01:36:43 | INFO  | Task e8ebc5dc-45a6-480e-b322-d5ae569258b8 (tempest) was prepared for execution. 2026-03-13 01:36:43.340878 | orchestrator | 2026-03-13 01:36:43 | INFO  | It takes a moment until task e8ebc5dc-45a6-480e-b322-d5ae569258b8 (tempest) has been started and output is visible here. 2026-03-13 01:37:58.274121 | orchestrator | 2026-03-13 01:37:58.274219 | orchestrator | PLAY [Run tempest] ************************************************************* 2026-03-13 01:37:58.274233 | orchestrator | 2026-03-13 01:37:58.274242 | orchestrator | TASK [osism.validations.tempest : Create tempest workdir] ********************** 2026-03-13 01:37:58.274265 | orchestrator | Friday 13 March 2026 01:36:47 +0000 (0:00:00.234) 0:00:00.234 ********** 2026-03-13 01:37:58.274275 | orchestrator | changed: [testbed-manager] 2026-03-13 01:37:58.274285 | orchestrator | 2026-03-13 01:37:58.274294 | orchestrator | TASK [osism.validations.tempest : Copy tempest wrapper script] ***************** 2026-03-13 01:37:58.274303 | orchestrator | Friday 13 March 2026 01:36:48 +0000 (0:00:00.704) 0:00:00.939 ********** 2026-03-13 01:37:58.274312 | orchestrator | changed: [testbed-manager] 2026-03-13 01:37:58.274321 | orchestrator | 2026-03-13 01:37:58.274357 | orchestrator | TASK [osism.validations.tempest : Check for existing tempest initialisation] *** 2026-03-13 01:37:58.274367 | orchestrator | Friday 13 March 2026 01:36:49 +0000 (0:00:01.191) 
0:00:02.131 ********** 2026-03-13 01:37:58.274376 | orchestrator | ok: [testbed-manager] 2026-03-13 01:37:58.274385 | orchestrator | 2026-03-13 01:37:58.274395 | orchestrator | TASK [osism.validations.tempest : Init tempest] ******************************** 2026-03-13 01:37:58.274404 | orchestrator | Friday 13 March 2026 01:36:49 +0000 (0:00:00.415) 0:00:02.546 ********** 2026-03-13 01:37:58.274413 | orchestrator | changed: [testbed-manager] 2026-03-13 01:37:58.274423 | orchestrator | 2026-03-13 01:37:58.274432 | orchestrator | TASK [osism.validations.tempest : Resolve image IDs] *************************** 2026-03-13 01:37:58.274442 | orchestrator | Friday 13 March 2026 01:37:10 +0000 (0:00:20.726) 0:00:23.272 ********** 2026-03-13 01:37:58.274556 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.3) 2026-03-13 01:37:58.274569 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.2) 2026-03-13 01:37:58.274581 | orchestrator | 2026-03-13 01:37:58.274591 | orchestrator | TASK [osism.validations.tempest : Assert images have been resolved] ************ 2026-03-13 01:37:58.274599 | orchestrator | Friday 13 March 2026 01:37:18 +0000 (0:00:07.729) 0:00:31.001 ********** 2026-03-13 01:37:58.274608 | orchestrator | ok: [testbed-manager] => { 2026-03-13 01:37:58.274617 | orchestrator |  "changed": false, 2026-03-13 01:37:58.274626 | orchestrator |  "msg": "All assertions passed" 2026-03-13 01:37:58.274635 | orchestrator | } 2026-03-13 01:37:58.274644 | orchestrator | 2026-03-13 01:37:58.274653 | orchestrator | TASK [osism.validations.tempest : Get auth token] ****************************** 2026-03-13 01:37:58.274662 | orchestrator | Friday 13 March 2026 01:37:18 +0000 (0:00:00.155) 0:00:31.157 ********** 2026-03-13 01:37:58.274670 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-13 01:37:58.274679 | orchestrator | 2026-03-13 01:37:58.274687 | orchestrator | TASK [osism.validations.tempest : Get endpoint catalog] 
************************ 2026-03-13 01:37:58.274696 | orchestrator | Friday 13 March 2026 01:37:22 +0000 (0:00:03.464) 0:00:34.622 ********** 2026-03-13 01:37:58.274723 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-13 01:37:58.274745 | orchestrator | 2026-03-13 01:37:58.274764 | orchestrator | TASK [osism.validations.tempest : Get service catalog] ************************* 2026-03-13 01:37:58.274780 | orchestrator | Friday 13 March 2026 01:37:23 +0000 (0:00:01.741) 0:00:36.364 ********** 2026-03-13 01:37:58.274797 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-13 01:37:58.274813 | orchestrator | 2026-03-13 01:37:58.274827 | orchestrator | TASK [osism.validations.tempest : Register img_file name] ********************** 2026-03-13 01:37:58.274842 | orchestrator | Friday 13 March 2026 01:37:27 +0000 (0:00:03.507) 0:00:39.872 ********** 2026-03-13 01:37:58.274856 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-13 01:37:58.274871 | orchestrator | 2026-03-13 01:37:58.274888 | orchestrator | TASK [osism.validations.tempest : Download img_file from image_ref] ************ 2026-03-13 01:37:58.274905 | orchestrator | Friday 13 March 2026 01:37:27 +0000 (0:00:00.211) 0:00:40.083 ********** 2026-03-13 01:37:58.274921 | orchestrator | changed: [testbed-manager] 2026-03-13 01:37:58.274938 | orchestrator | 2026-03-13 01:37:58.274955 | orchestrator | TASK [osism.validations.tempest : Install qemu-utils package] ****************** 2026-03-13 01:37:58.274971 | orchestrator | Friday 13 March 2026 01:37:30 +0000 (0:00:02.947) 0:00:43.031 ********** 2026-03-13 01:37:58.274988 | orchestrator | changed: [testbed-manager] 2026-03-13 01:37:58.275004 | orchestrator | 2026-03-13 01:37:58.275019 | orchestrator | TASK [osism.validations.tempest : Convert img_file to qcow2 format] ************ 2026-03-13 01:37:58.275037 | orchestrator | Friday 13 March 2026 01:37:39 +0000 (0:00:08.859) 0:00:51.891 ********** 2026-03-13 01:37:58.275053 | orchestrator | 
changed: [testbed-manager] 2026-03-13 01:37:58.275070 | orchestrator | 2026-03-13 01:37:58.275083 | orchestrator | TASK [osism.validations.tempest : Get network API extensions] ****************** 2026-03-13 01:37:58.275092 | orchestrator | Friday 13 March 2026 01:37:40 +0000 (0:00:00.735) 0:00:52.627 ********** 2026-03-13 01:37:58.275101 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-13 01:37:58.275109 | orchestrator | 2026-03-13 01:37:58.275118 | orchestrator | TASK [osism.validations.tempest : Revoke token] ******************************** 2026-03-13 01:37:58.275127 | orchestrator | Friday 13 March 2026 01:37:41 +0000 (0:00:01.518) 0:00:54.145 ********** 2026-03-13 01:37:58.275136 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-13 01:37:58.275144 | orchestrator | 2026-03-13 01:37:58.275153 | orchestrator | TASK [osism.validations.tempest : Set fact for config option api_extensions] *** 2026-03-13 01:37:58.275161 | orchestrator | Friday 13 March 2026 01:37:43 +0000 (0:00:01.608) 0:00:55.753 ********** 2026-03-13 01:37:58.275170 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-13 01:37:58.275178 | orchestrator | 2026-03-13 01:37:58.275187 | orchestrator | TASK [osism.validations.tempest : Set fact for config option img_file] ********* 2026-03-13 01:37:58.275210 | orchestrator | Friday 13 March 2026 01:37:43 +0000 (0:00:00.186) 0:00:55.940 ********** 2026-03-13 01:37:58.275224 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-13 01:37:58.275237 | orchestrator | 2026-03-13 01:37:58.275264 | orchestrator | TASK [osism.validations.tempest : Resolve floating network ID] ***************** 2026-03-13 01:37:58.275279 | orchestrator | Friday 13 March 2026 01:37:43 +0000 (0:00:00.186) 0:00:56.126 ********** 2026-03-13 01:37:58.275293 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-13 01:37:58.275308 | orchestrator | 2026-03-13 01:37:58.275323 | orchestrator | TASK [osism.validations.tempest : Assert floating network 
id has been resolved] *** 2026-03-13 01:37:58.275362 | orchestrator | Friday 13 March 2026 01:37:47 +0000 (0:00:03.702) 0:00:59.829 ********** 2026-03-13 01:37:58.275373 | orchestrator | ok: [testbed-manager -> localhost] => { 2026-03-13 01:37:58.275382 | orchestrator |  "changed": false, 2026-03-13 01:37:58.275390 | orchestrator |  "msg": "All assertions passed" 2026-03-13 01:37:58.275399 | orchestrator | } 2026-03-13 01:37:58.275407 | orchestrator | 2026-03-13 01:37:58.275417 | orchestrator | TASK [osism.validations.tempest : Resolve flavor IDs] ************************** 2026-03-13 01:37:58.275426 | orchestrator | Friday 13 March 2026 01:37:47 +0000 (0:00:00.178) 0:01:00.007 ********** 2026-03-13 01:37:58.275435 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})  2026-03-13 01:37:58.275446 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})  2026-03-13 01:37:58.275482 | orchestrator | skipping: [testbed-manager] 2026-03-13 01:37:58.275492 | orchestrator | 2026-03-13 01:37:58.275505 | orchestrator | TASK [osism.validations.tempest : Assert flavors have been resolved] *********** 2026-03-13 01:37:58.275519 | orchestrator | Friday 13 March 2026 01:37:47 +0000 (0:00:00.375) 0:01:00.383 ********** 2026-03-13 01:37:58.275534 | orchestrator | skipping: [testbed-manager] 2026-03-13 01:37:58.275548 | orchestrator | 2026-03-13 01:37:58.275563 | orchestrator | TASK [osism.validations.tempest : Get stats of exclude list] ******************* 2026-03-13 01:37:58.275577 | orchestrator | Friday 13 March 2026 01:37:47 +0000 (0:00:00.156) 0:01:00.540 ********** 2026-03-13 01:37:58.275587 | orchestrator | ok: [testbed-manager] 2026-03-13 01:37:58.275595 | orchestrator | 2026-03-13 01:37:58.275604 | orchestrator | TASK [osism.validations.tempest : Copy exclude list] *************************** 2026-03-13 01:37:58.275612 | orchestrator | Friday 13 March 2026 
01:37:48 +0000 (0:00:00.457) 0:01:00.997 ********** 2026-03-13 01:37:58.275621 | orchestrator | changed: [testbed-manager] 2026-03-13 01:37:58.275629 | orchestrator | 2026-03-13 01:37:58.275638 | orchestrator | TASK [osism.validations.tempest : Get stats of include list] ******************* 2026-03-13 01:37:58.275647 | orchestrator | Friday 13 March 2026 01:37:49 +0000 (0:00:00.896) 0:01:01.894 ********** 2026-03-13 01:37:58.275656 | orchestrator | ok: [testbed-manager] 2026-03-13 01:37:58.275664 | orchestrator | 2026-03-13 01:37:58.275673 | orchestrator | TASK [osism.validations.tempest : Copy include list] *************************** 2026-03-13 01:37:58.275681 | orchestrator | Friday 13 March 2026 01:37:49 +0000 (0:00:00.414) 0:01:02.308 ********** 2026-03-13 01:37:58.275690 | orchestrator | skipping: [testbed-manager] 2026-03-13 01:37:58.275698 | orchestrator | 2026-03-13 01:37:58.275707 | orchestrator | TASK [osism.validations.tempest : Create tempest flavors] ********************** 2026-03-13 01:37:58.275715 | orchestrator | Friday 13 March 2026 01:37:49 +0000 (0:00:00.145) 0:01:02.454 ********** 2026-03-13 01:37:58.275724 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1}) 2026-03-13 01:37:58.275734 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2}) 2026-03-13 01:37:58.275742 | orchestrator | 2026-03-13 01:37:58.275751 | orchestrator | TASK [osism.validations.tempest : Copy tempest.conf file] ********************** 2026-03-13 01:37:58.275759 | orchestrator | Friday 13 March 2026 01:37:57 +0000 (0:00:07.429) 0:01:09.883 ********** 2026-03-13 01:37:58.275767 | orchestrator | changed: [testbed-manager] 2026-03-13 01:37:58.275776 | orchestrator | 2026-03-13 01:37:58.275792 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-13 01:37:58.275802 | orchestrator | 
testbed-manager : ok=24  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-13 01:37:58.275812 | orchestrator |
2026-03-13 01:37:58.275820 | orchestrator |
2026-03-13 01:37:58.275829 | orchestrator | TASKS RECAP ********************************************************************
2026-03-13 01:37:58.275837 | orchestrator | Friday 13 March 2026 01:37:58 +0000 (0:00:00.978) 0:01:10.861 **********
2026-03-13 01:37:58.275852 | orchestrator | ===============================================================================
2026-03-13 01:37:58.275866 | orchestrator | osism.validations.tempest : Init tempest ------------------------------- 20.73s
2026-03-13 01:37:58.275880 | orchestrator | osism.validations.tempest : Install qemu-utils package ------------------ 8.86s
2026-03-13 01:37:58.275894 | orchestrator | osism.validations.tempest : Resolve image IDs --------------------------- 7.73s
2026-03-13 01:37:58.275908 | orchestrator | osism.validations.tempest : Create tempest flavors ---------------------- 7.43s
2026-03-13 01:37:58.275931 | orchestrator | osism.validations.tempest : Resolve floating network ID ----------------- 3.70s
2026-03-13 01:37:58.275947 | orchestrator | osism.validations.tempest : Get service catalog ------------------------- 3.51s
2026-03-13 01:37:58.275962 | orchestrator | osism.validations.tempest : Get auth token ------------------------------ 3.46s
2026-03-13 01:37:58.275977 | orchestrator | osism.validations.tempest : Download img_file from image_ref ------------ 2.95s
2026-03-13 01:37:58.275991 | orchestrator | osism.validations.tempest : Get endpoint catalog ------------------------ 1.74s
2026-03-13 01:37:58.276007 | orchestrator | osism.validations.tempest : Revoke token -------------------------------- 1.61s
2026-03-13 01:37:58.276022 | orchestrator | osism.validations.tempest : Get network API extensions ------------------ 1.52s
2026-03-13 01:37:58.276037 | orchestrator | osism.validations.tempest : Copy tempest wrapper script ----------------- 1.19s
2026-03-13 01:37:58.276050 | orchestrator | osism.validations.tempest : Copy tempest.conf file ---------------------- 0.98s
2026-03-13 01:37:58.276064 | orchestrator | osism.validations.tempest : Copy exclude list --------------------------- 0.90s
2026-03-13 01:37:58.276073 | orchestrator | osism.validations.tempest : Convert img_file to qcow2 format ------------ 0.74s
2026-03-13 01:37:58.276081 | orchestrator | osism.validations.tempest : Create tempest workdir ---------------------- 0.70s
2026-03-13 01:37:58.276090 | orchestrator | osism.validations.tempest : Get stats of exclude list ------------------- 0.46s
2026-03-13 01:37:58.276108 | orchestrator | osism.validations.tempest : Check for existing tempest initialisation --- 0.42s
2026-03-13 01:37:58.653476 | orchestrator | osism.validations.tempest : Get stats of include list ------------------- 0.41s
2026-03-13 01:37:58.653556 | orchestrator | osism.validations.tempest : Resolve flavor IDs -------------------------- 0.38s
2026-03-13 01:37:58.937343 | orchestrator | + sed -i '/log_dir =/d' /opt/tempest/etc/tempest.conf
2026-03-13 01:37:58.939731 | orchestrator | + sed -i '/log_file =/d' /opt/tempest/etc/tempest.conf
2026-03-13 01:37:58.943839 | orchestrator |
2026-03-13 01:37:58.943908 | orchestrator | ## IDENTITY (API)
2026-03-13 01:37:58.943915 | orchestrator |
2026-03-13 01:37:58.943920 | orchestrator | + [[ false == \t\r\u\e ]]
2026-03-13 01:37:58.943925 | orchestrator | + echo
2026-03-13 01:37:58.943930 | orchestrator | + echo '## IDENTITY (API)'
2026-03-13 01:37:58.943934 | orchestrator | + echo
2026-03-13 01:37:58.943939 | orchestrator | + _tempest tempest.api.identity.v3
2026-03-13 01:37:58.943944 | orchestrator | + local regex=tempest.api.identity.v3
2026-03-13 01:37:58.944894 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest
registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16 2026-03-13 01:37:58.946241 | orchestrator | ++ date +%Y%m%d-%H%M 2026-03-13 01:37:58.949983 | orchestrator | + tee -a /opt/tempest/20260313-0137.log 2026-03-13 01:38:02.765179 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-03-13 01:38:02.765324 | orchestrator | Did you mean one of these? 2026-03-13 01:38:02.765341 | orchestrator | help 2026-03-13 01:38:02.765349 | orchestrator | init 2026-03-13 01:38:03.147314 | orchestrator | 2026-03-13 01:38:03.147380 | orchestrator | ## IMAGE (API) 2026-03-13 01:38:03.147386 | orchestrator | 2026-03-13 01:38:03.147390 | orchestrator | + echo 2026-03-13 01:38:03.147394 | orchestrator | + echo '## IMAGE (API)' 2026-03-13 01:38:03.147399 | orchestrator | + echo 2026-03-13 01:38:03.147404 | orchestrator | + _tempest tempest.api.image.v2 2026-03-13 01:38:03.147408 | orchestrator | + local regex=tempest.api.image.v2 2026-03-13 01:38:03.147605 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16 2026-03-13 01:38:03.150249 | orchestrator | ++ date +%Y%m%d-%H%M 2026-03-13 01:38:03.157954 | orchestrator | + tee -a /opt/tempest/20260313-0138.log 2026-03-13 01:38:06.779386 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16' is not a tempest command. 
See 'tempest --help'. 2026-03-13 01:38:06.779488 | orchestrator | Did you mean one of these? 2026-03-13 01:38:06.779499 | orchestrator | help 2026-03-13 01:38:06.779506 | orchestrator | init 2026-03-13 01:38:07.134617 | orchestrator | 2026-03-13 01:38:07.134746 | orchestrator | ## NETWORK (API) 2026-03-13 01:38:07.134757 | orchestrator | 2026-03-13 01:38:07.134764 | orchestrator | + echo 2026-03-13 01:38:07.134770 | orchestrator | + echo '## NETWORK (API)' 2026-03-13 01:38:07.134777 | orchestrator | + echo 2026-03-13 01:38:07.134783 | orchestrator | + _tempest tempest.api.network 2026-03-13 01:38:07.134789 | orchestrator | + local regex=tempest.api.network 2026-03-13 01:38:07.134799 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16 2026-03-13 01:38:07.134864 | orchestrator | ++ date +%Y%m%d-%H%M 2026-03-13 01:38:07.136948 | orchestrator | + tee -a /opt/tempest/20260313-0138.log 2026-03-13 01:38:10.771851 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-03-13 01:38:10.771942 | orchestrator | Did you mean one of these? 
2026-03-13 01:38:10.771953 | orchestrator | help 2026-03-13 01:38:10.771958 | orchestrator | init 2026-03-13 01:38:11.136392 | orchestrator | 2026-03-13 01:38:11.136541 | orchestrator | ## VOLUME (API) 2026-03-13 01:38:11.136556 | orchestrator | 2026-03-13 01:38:11.136563 | orchestrator | + echo 2026-03-13 01:38:11.136570 | orchestrator | + echo '## VOLUME (API)' 2026-03-13 01:38:11.136576 | orchestrator | + echo 2026-03-13 01:38:11.136582 | orchestrator | + _tempest tempest.api.volume 2026-03-13 01:38:11.136588 | orchestrator | + local regex=tempest.api.volume 2026-03-13 01:38:11.136681 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16 2026-03-13 01:38:11.138207 | orchestrator | ++ date +%Y%m%d-%H%M 2026-03-13 01:38:11.140326 | orchestrator | + tee -a /opt/tempest/20260313-0138.log 2026-03-13 01:38:14.621318 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-03-13 01:38:14.621396 | orchestrator | Did you mean one of these? 
2026-03-13 01:38:14.621406 | orchestrator | help 2026-03-13 01:38:14.621414 | orchestrator | init 2026-03-13 01:38:14.883173 | orchestrator | 2026-03-13 01:38:14.883242 | orchestrator | ## COMPUTE (API) 2026-03-13 01:38:14.883254 | orchestrator | 2026-03-13 01:38:14.883261 | orchestrator | + echo 2026-03-13 01:38:14.883269 | orchestrator | + echo '## COMPUTE (API)' 2026-03-13 01:38:14.883276 | orchestrator | + echo 2026-03-13 01:38:14.883282 | orchestrator | + _tempest tempest.api.compute 2026-03-13 01:38:14.883314 | orchestrator | + local regex=tempest.api.compute 2026-03-13 01:38:14.883323 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16 2026-03-13 01:38:14.883903 | orchestrator | ++ date +%Y%m%d-%H%M 2026-03-13 01:38:14.886964 | orchestrator | + tee -a /opt/tempest/20260313-0138.log 2026-03-13 01:38:18.111029 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-03-13 01:38:18.111115 | orchestrator | Did you mean one of these? 
2026-03-13 01:38:18.111126 | orchestrator | help 2026-03-13 01:38:18.111134 | orchestrator | init 2026-03-13 01:38:18.382802 | orchestrator | 2026-03-13 01:38:18.382860 | orchestrator | ## DNS (API) 2026-03-13 01:38:18.382866 | orchestrator | 2026-03-13 01:38:18.382871 | orchestrator | + echo 2026-03-13 01:38:18.382875 | orchestrator | + echo '## DNS (API)' 2026-03-13 01:38:18.382880 | orchestrator | + echo 2026-03-13 01:38:18.382885 | orchestrator | + _tempest designate_tempest_plugin.tests.api.v2 2026-03-13 01:38:18.382890 | orchestrator | + local regex=designate_tempest_plugin.tests.api.v2 2026-03-13 01:38:18.382896 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16 2026-03-13 01:38:18.383201 | orchestrator | ++ date +%Y%m%d-%H%M 2026-03-13 01:38:18.385612 | orchestrator | + tee -a /opt/tempest/20260313-0138.log 2026-03-13 01:38:21.577566 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-03-13 01:38:21.577669 | orchestrator | Did you mean one of these? 
2026-03-13 01:38:21.577682 | orchestrator | help 2026-03-13 01:38:21.577690 | orchestrator | init 2026-03-13 01:38:21.869177 | orchestrator | + echo 2026-03-13 01:38:21.871332 | orchestrator | 2026-03-13 01:38:21.871404 | orchestrator | ## OBJECT-STORE (API) 2026-03-13 01:38:21.871416 | orchestrator | 2026-03-13 01:38:21.871424 | orchestrator | + echo '## OBJECT-STORE (API)' 2026-03-13 01:38:21.871430 | orchestrator | + echo 2026-03-13 01:38:21.871504 | orchestrator | + _tempest tempest.api.object_storage 2026-03-13 01:38:21.871514 | orchestrator | + local regex=tempest.api.object_storage 2026-03-13 01:38:21.871523 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16 2026-03-13 01:38:21.871532 | orchestrator | ++ date +%Y%m%d-%H%M 2026-03-13 01:38:21.873090 | orchestrator | + tee -a /opt/tempest/20260313-0138.log 2026-03-13 01:38:25.486690 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-03-13 01:38:25.486898 | orchestrator | Did you mean one of these? 
2026-03-13 01:38:25.486914 | orchestrator | help
2026-03-13 01:38:25.486921 | orchestrator | init
2026-03-13 01:38:26.078309 | orchestrator | ok: Runtime: 0:01:55.069156
2026-03-13 01:38:26.099721 |
2026-03-13 01:38:26.099913 | TASK [Check prometheus alert status]
2026-03-13 01:38:26.638603 | orchestrator | skipping: Conditional result was False
2026-03-13 01:38:26.642359 |
2026-03-13 01:38:26.642541 | PLAY RECAP
2026-03-13 01:38:26.642687 | orchestrator | ok: 25 changed: 12 unreachable: 0 failed: 0 skipped: 4 rescued: 0 ignored: 0
2026-03-13 01:38:26.642756 |
2026-03-13 01:38:26.868383 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-03-13 01:38:26.871164 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-03-13 01:38:27.626966 |
2026-03-13 01:38:27.627165 | PLAY [Post output play]
2026-03-13 01:38:27.644466 |
2026-03-13 01:38:27.644600 | LOOP [stage-output : Register sources]
2026-03-13 01:38:27.706795 |
2026-03-13 01:38:27.707103 | TASK [stage-output : Check sudo]
2026-03-13 01:38:28.529783 | orchestrator | sudo: a password is required
2026-03-13 01:38:28.743548 | orchestrator | ok: Runtime: 0:00:00.014095
2026-03-13 01:38:28.759955 |
2026-03-13 01:38:28.760161 | LOOP [stage-output : Set source and destination for files and folders]
2026-03-13 01:38:28.785742 |
2026-03-13 01:38:28.785972 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-03-13 01:38:28.856888 | orchestrator | ok
2026-03-13 01:38:28.866388 |
2026-03-13 01:38:28.866539 | LOOP [stage-output : Ensure target folders exist]
2026-03-13 01:38:29.357447 | orchestrator | ok: "docs"
2026-03-13 01:38:29.357820 |
2026-03-13 01:38:29.661461 | orchestrator | ok: "artifacts"
2026-03-13 01:38:29.932971 | orchestrator | ok: "logs"
2026-03-13 01:38:29.955847 |
2026-03-13 01:38:29.956149 | LOOP [stage-output : Copy files and folders to staging folder]
2026-03-13 01:38:29.998825 |
2026-03-13 01:38:29.999204 | TASK
[stage-output : Make all log files readable] 2026-03-13 01:38:30.342175 | orchestrator | ok 2026-03-13 01:38:30.352208 | 2026-03-13 01:38:30.352352 | TASK [stage-output : Rename log files that match extensions_to_txt] 2026-03-13 01:38:30.387368 | orchestrator | skipping: Conditional result was False 2026-03-13 01:38:30.404188 | 2026-03-13 01:38:30.404351 | TASK [stage-output : Discover log files for compression] 2026-03-13 01:38:30.429059 | orchestrator | skipping: Conditional result was False 2026-03-13 01:38:30.440603 | 2026-03-13 01:38:30.440759 | LOOP [stage-output : Archive everything from logs] 2026-03-13 01:38:30.495541 | 2026-03-13 01:38:30.495748 | PLAY [Post cleanup play] 2026-03-13 01:38:30.504555 | 2026-03-13 01:38:30.504684 | TASK [Set cloud fact (Zuul deployment)] 2026-03-13 01:38:30.559647 | orchestrator | ok 2026-03-13 01:38:30.570663 | 2026-03-13 01:38:30.570794 | TASK [Set cloud fact (local deployment)] 2026-03-13 01:38:30.605129 | orchestrator | skipping: Conditional result was False 2026-03-13 01:38:30.620103 | 2026-03-13 01:38:30.620254 | TASK [Clean the cloud environment] 2026-03-13 01:38:32.597861 | orchestrator | 2026-03-13 01:38:32 - clean up servers 2026-03-13 01:38:33.360972 | orchestrator | 2026-03-13 01:38:33 - testbed-manager 2026-03-13 01:38:33.442915 | orchestrator | 2026-03-13 01:38:33 - testbed-node-3 2026-03-13 01:38:33.536884 | orchestrator | 2026-03-13 01:38:33 - testbed-node-0 2026-03-13 01:38:33.633211 | orchestrator | 2026-03-13 01:38:33 - testbed-node-1 2026-03-13 01:38:33.723405 | orchestrator | 2026-03-13 01:38:33 - testbed-node-2 2026-03-13 01:38:33.817672 | orchestrator | 2026-03-13 01:38:33 - testbed-node-4 2026-03-13 01:38:33.904989 | orchestrator | 2026-03-13 01:38:33 - testbed-node-5 2026-03-13 01:38:34.001950 | orchestrator | 2026-03-13 01:38:34 - clean up keypairs 2026-03-13 01:38:34.020132 | orchestrator | 2026-03-13 01:38:34 - testbed 2026-03-13 01:38:34.045381 | orchestrator | 2026-03-13 01:38:34 - wait for 
servers to be gone 2026-03-13 01:38:44.917129 | orchestrator | 2026-03-13 01:38:44 - clean up ports 2026-03-13 01:38:45.106505 | orchestrator | 2026-03-13 01:38:45 - 10edf501-0373-498e-83cc-9fc83f93ca2d 2026-03-13 01:38:45.345340 | orchestrator | 2026-03-13 01:38:45 - 1f6c6a32-8f8f-4060-8cea-a65a19859844 2026-03-13 01:38:45.640629 | orchestrator | 2026-03-13 01:38:45 - 4bbebad7-d66d-4323-8d58-a58fe154918a 2026-03-13 01:38:45.854951 | orchestrator | 2026-03-13 01:38:45 - 5e05a1ea-72fa-4d49-86d9-bfebc9319a09 2026-03-13 01:38:46.262172 | orchestrator | 2026-03-13 01:38:46 - 96ab7177-1ad5-4cd1-965d-133ae248ee0e 2026-03-13 01:38:46.473043 | orchestrator | 2026-03-13 01:38:46 - 9b8676e4-64c5-4f77-b1a1-83e8b2677b31 2026-03-13 01:38:46.673907 | orchestrator | 2026-03-13 01:38:46 - d4ff417b-f2e4-403f-b174-02b447740d6d 2026-03-13 01:38:46.875749 | orchestrator | 2026-03-13 01:38:46 - clean up volumes 2026-03-13 01:38:46.997866 | orchestrator | 2026-03-13 01:38:46 - testbed-volume-4-node-base 2026-03-13 01:38:47.043159 | orchestrator | 2026-03-13 01:38:47 - testbed-volume-1-node-base 2026-03-13 01:38:47.079934 | orchestrator | 2026-03-13 01:38:47 - testbed-volume-3-node-base 2026-03-13 01:38:47.117139 | orchestrator | 2026-03-13 01:38:47 - testbed-volume-2-node-base 2026-03-13 01:38:47.162302 | orchestrator | 2026-03-13 01:38:47 - testbed-volume-5-node-base 2026-03-13 01:38:47.203155 | orchestrator | 2026-03-13 01:38:47 - testbed-volume-0-node-base 2026-03-13 01:38:47.241327 | orchestrator | 2026-03-13 01:38:47 - testbed-volume-manager-base 2026-03-13 01:38:47.281394 | orchestrator | 2026-03-13 01:38:47 - testbed-volume-0-node-3 2026-03-13 01:38:47.319563 | orchestrator | 2026-03-13 01:38:47 - testbed-volume-7-node-4 2026-03-13 01:38:47.361882 | orchestrator | 2026-03-13 01:38:47 - testbed-volume-5-node-5 2026-03-13 01:38:47.401025 | orchestrator | 2026-03-13 01:38:47 - testbed-volume-3-node-3 2026-03-13 01:38:47.440159 | orchestrator | 2026-03-13 01:38:47 - 
testbed-volume-1-node-4 2026-03-13 01:38:47.479053 | orchestrator | 2026-03-13 01:38:47 - testbed-volume-4-node-4 2026-03-13 01:38:47.517517 | orchestrator | 2026-03-13 01:38:47 - testbed-volume-8-node-5 2026-03-13 01:38:47.554079 | orchestrator | 2026-03-13 01:38:47 - testbed-volume-2-node-5 2026-03-13 01:38:47.590342 | orchestrator | 2026-03-13 01:38:47 - testbed-volume-6-node-3 2026-03-13 01:38:47.627340 | orchestrator | 2026-03-13 01:38:47 - disconnect routers 2026-03-13 01:38:47.752324 | orchestrator | 2026-03-13 01:38:47 - testbed 2026-03-13 01:38:49.310471 | orchestrator | 2026-03-13 01:38:49 - clean up subnets 2026-03-13 01:38:49.371584 | orchestrator | 2026-03-13 01:38:49 - subnet-testbed-management 2026-03-13 01:38:49.524249 | orchestrator | 2026-03-13 01:38:49 - clean up networks 2026-03-13 01:38:49.696751 | orchestrator | 2026-03-13 01:38:49 - net-testbed-management 2026-03-13 01:38:49.980039 | orchestrator | 2026-03-13 01:38:49 - clean up security groups 2026-03-13 01:38:50.018667 | orchestrator | 2026-03-13 01:38:50 - testbed-management 2026-03-13 01:38:50.144515 | orchestrator | 2026-03-13 01:38:50 - testbed-node 2026-03-13 01:38:50.248450 | orchestrator | 2026-03-13 01:38:50 - clean up floating ips 2026-03-13 01:38:50.282322 | orchestrator | 2026-03-13 01:38:50 - 81.163.193.64 2026-03-13 01:38:50.623821 | orchestrator | 2026-03-13 01:38:50 - clean up routers 2026-03-13 01:38:50.723223 | orchestrator | 2026-03-13 01:38:50 - testbed 2026-03-13 01:38:51.682803 | orchestrator | ok: Runtime: 0:00:20.655470 2026-03-13 01:38:51.687186 | 2026-03-13 01:38:51.687350 | PLAY RECAP 2026-03-13 01:38:51.687486 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2026-03-13 01:38:51.687547 | 2026-03-13 01:38:51.819707 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-03-13 01:38:51.820856 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 
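Every `_tempest` call above aborted with "'run --workspace-path ... --concurrency 16' is not a tempest command", meaning the subcommand and all its flags reached the tempest CLI as a single word. A likely (but unconfirmed) cause is argument collapsing in the wrapper or the image entrypoint, e.g. forwarding a quoted `"$*"` instead of `"$@"`. A minimal sketch of the pitfall; `fake_cli` is hypothetical and stands in for `tempest`, which inspects only its first argument:

```shell
#!/bin/sh
# Sketch of the suspected quoting bug (an assumption, not confirmed
# from the log). fake_cli stands in for the tempest CLI.

fake_cli() {
    case "$1" in
        run) echo "ok: subcommand=run argc=$#" ;;
        *)   echo "error: '$1' is not a command" ;;
    esac
}

args="run --regex tempest.api.identity.v3 --concurrency 16"

# Forwarding the flags as ONE quoted word: the whole string lands in $1,
# mirroring the "'run ...' is not a tempest command" errors above.
broken=$(fake_cli "$args")

# Forwarding them as separate words, which is what a subcommand parser
# needs. Unquoted expansion is safe here only because the values contain
# no spaces or glob characters; "$@" over a real argument list is the
# robust fix.
working=$(fake_cli $args)

echo "$broken"
echo "$working"
```

The same rule applies to `docker run`: everything after the image name must arrive as separate argv words for the entrypoint, not as one quoted string.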
2026-03-13 01:38:52.594773 | 2026-03-13 01:38:52.594959 | PLAY [Cleanup play] 2026-03-13 01:38:52.611112 | 2026-03-13 01:38:52.611248 | TASK [Set cloud fact (Zuul deployment)] 2026-03-13 01:38:52.668589 | orchestrator | ok 2026-03-13 01:38:52.678259 | 2026-03-13 01:38:52.678413 | TASK [Set cloud fact (local deployment)] 2026-03-13 01:38:52.723370 | orchestrator | skipping: Conditional result was False 2026-03-13 01:38:52.740623 | 2026-03-13 01:38:52.740782 | TASK [Clean the cloud environment] 2026-03-13 01:38:53.927682 | orchestrator | 2026-03-13 01:38:53 - clean up servers 2026-03-13 01:38:54.416345 | orchestrator | 2026-03-13 01:38:54 - clean up keypairs 2026-03-13 01:38:54.435046 | orchestrator | 2026-03-13 01:38:54 - wait for servers to be gone 2026-03-13 01:38:54.472153 | orchestrator | 2026-03-13 01:38:54 - clean up ports 2026-03-13 01:38:54.552074 | orchestrator | 2026-03-13 01:38:54 - clean up volumes 2026-03-13 01:38:54.612938 | orchestrator | 2026-03-13 01:38:54 - disconnect routers 2026-03-13 01:38:54.639141 | orchestrator | 2026-03-13 01:38:54 - clean up subnets 2026-03-13 01:38:54.657614 | orchestrator | 2026-03-13 01:38:54 - clean up networks 2026-03-13 01:38:54.820715 | orchestrator | 2026-03-13 01:38:54 - clean up security groups 2026-03-13 01:38:54.856056 | orchestrator | 2026-03-13 01:38:54 - clean up floating ips 2026-03-13 01:38:54.881171 | orchestrator | 2026-03-13 01:38:54 - clean up routers 2026-03-13 01:38:55.286575 | orchestrator | ok: Runtime: 0:00:01.361843 2026-03-13 01:38:55.290463 | 2026-03-13 01:38:55.290634 | PLAY RECAP 2026-03-13 01:38:55.290767 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2026-03-13 01:38:55.290888 | 2026-03-13 01:38:55.423410 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-03-13 01:38:55.426084 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-03-13 01:38:56.216722 | 
2026-03-13 01:38:56.216902 | PLAY [Base post-fetch] 2026-03-13 01:38:56.233143 | 2026-03-13 01:38:56.233282 | TASK [fetch-output : Set log path for multiple nodes] 2026-03-13 01:38:56.288933 | orchestrator | skipping: Conditional result was False 2026-03-13 01:38:56.304941 | 2026-03-13 01:38:56.305214 | TASK [fetch-output : Set log path for single node] 2026-03-13 01:38:56.365520 | orchestrator | ok 2026-03-13 01:38:56.376285 | 2026-03-13 01:38:56.376463 | LOOP [fetch-output : Ensure local output dirs] 2026-03-13 01:38:56.886169 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/e7d915585cc84a62ad88b8cff0bf3e53/work/logs" 2026-03-13 01:38:57.180617 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/e7d915585cc84a62ad88b8cff0bf3e53/work/artifacts" 2026-03-13 01:38:57.462808 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/e7d915585cc84a62ad88b8cff0bf3e53/work/docs" 2026-03-13 01:38:57.483036 | 2026-03-13 01:38:57.483191 | LOOP [fetch-output : Collect logs, artifacts and docs] 2026-03-13 01:38:58.413072 | orchestrator | changed: .d..t...... ./ 2026-03-13 01:38:58.413425 | orchestrator | changed: All items complete 2026-03-13 01:38:58.413492 | 2026-03-13 01:38:59.189897 | orchestrator | changed: .d..t...... ./ 2026-03-13 01:38:59.955746 | orchestrator | changed: .d..t...... 
./ 2026-03-13 01:38:59.987741 | 2026-03-13 01:38:59.987914 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-03-13 01:39:00.024725 | orchestrator | skipping: Conditional result was False 2026-03-13 01:39:00.029532 | orchestrator | skipping: Conditional result was False 2026-03-13 01:39:00.050112 | 2026-03-13 01:39:00.050237 | PLAY RECAP 2026-03-13 01:39:00.050317 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-03-13 01:39:00.050363 | 2026-03-13 01:39:00.181834 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-03-13 01:39:00.184471 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-03-13 01:39:00.943576 | 2026-03-13 01:39:00.943737 | PLAY [Base post] 2026-03-13 01:39:00.958333 | 2026-03-13 01:39:00.958465 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-03-13 01:39:02.002763 | orchestrator | changed 2026-03-13 01:39:02.013862 | 2026-03-13 01:39:02.014041 | PLAY RECAP 2026-03-13 01:39:02.014123 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-03-13 01:39:02.014201 | 2026-03-13 01:39:02.136437 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-03-13 01:39:02.139051 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-03-13 01:39:02.940333 | 2026-03-13 01:39:02.940503 | PLAY [Base post-logs] 2026-03-13 01:39:02.951394 | 2026-03-13 01:39:02.951526 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-03-13 01:39:03.403665 | localhost | changed 2026-03-13 01:39:03.416899 | 2026-03-13 01:39:03.417123 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-03-13 01:39:03.444088 | localhost | ok 2026-03-13 01:39:03.448804 | 2026-03-13 01:39:03.448956 | TASK [Set zuul-log-path fact] 2026-03-13 
01:39:03.465897 | localhost | ok 2026-03-13 01:39:03.475639 | 2026-03-13 01:39:03.475765 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-03-13 01:39:03.511639 | localhost | ok 2026-03-13 01:39:03.516798 | 2026-03-13 01:39:03.516971 | TASK [upload-logs : Create log directories] 2026-03-13 01:39:04.043014 | localhost | changed 2026-03-13 01:39:04.045821 | 2026-03-13 01:39:04.045924 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-03-13 01:39:04.557432 | localhost -> localhost | ok: Runtime: 0:00:00.007147 2026-03-13 01:39:04.563102 | 2026-03-13 01:39:04.563245 | TASK [upload-logs : Upload logs to log server] 2026-03-13 01:39:05.129380 | localhost | Output suppressed because no_log was given 2026-03-13 01:39:05.134173 | 2026-03-13 01:39:05.134397 | LOOP [upload-logs : Compress console log and json output] 2026-03-13 01:39:05.189167 | localhost | skipping: Conditional result was False 2026-03-13 01:39:05.194217 | localhost | skipping: Conditional result was False 2026-03-13 01:39:05.206714 | 2026-03-13 01:39:05.206961 | LOOP [upload-logs : Upload compressed console log and json output] 2026-03-13 01:39:05.253307 | localhost | skipping: Conditional result was False 2026-03-13 01:39:05.253884 | 2026-03-13 01:39:05.257783 | localhost | skipping: Conditional result was False 2026-03-13 01:39:05.271127 | 2026-03-13 01:39:05.271383 | LOOP [upload-logs : Upload console log and json output]
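Note that the run above ended `RUN END RESULT_NORMAL` with `failed=0`, even though every tempest invocation aborted before executing a single test. A hedged sketch of a guard one could add to the wrapper (not part of this job; `check_log` is a hypothetical helper) that turns the CLI error signature in the captured per-run log into a hard failure:

```shell
#!/bin/sh
# Hypothetical guard, not present in the job above: scan a captured
# tempest run log (e.g. /opt/tempest/20260313-0137.log) for the CLI
# error signature so the build fails instead of reporting success.

check_log() {
    # $1: path to a tempest run log; non-zero if the CLI rejected `run`
    if grep -q "is not a tempest command" "$1"; then
        echo "tempest CLI rejected its arguments: no tests were executed" >&2
        return 1
    fi
    echo "no tempest CLI errors in $1"
}
```

Invoked right after each `tee -a` capture, this would have surfaced the argument-passing breakage instead of letting the job finish green.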